* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
@ 2021-10-19 9:23 0% ` Andrew Rybchenko
2021-10-19 9:27 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-19 9:23 UTC (permalink / raw)
To: David Marchand
Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon
On 10/19/21 12:04 PM, Andrew Rybchenko wrote:
> On 10/19/21 11:49 AM, David Marchand wrote:
>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>> Add RTE_ prefix to macro used to register mempool driver.
>>> The old one is still available but deprecated.
>>
>> ODP seems to use its own mempools.
>>
>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>
>> I'd say it counts as a driver macro.
>> If so, we could hide it in a driver-only header, along with
>> rte_mempool_register_ops getting marked as internal.
>>
>> $ git grep-all -w rte_mempool_register_ops
>> FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
>> FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
>
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
>
> Should I add one more patch to the series?
>
I'm afraid not now. It is either too invasive or too illogical.
Basically it would need to move rte_mempool_ops to the header
as well, but it is heavily used by inline functions in
rte_mempool.h.
Of course, it is possible to move just the register API
to the mempool_driver.h header, but the value of such
a change is not really big.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
2021-10-19 9:23 0% ` Andrew Rybchenko
@ 2021-10-19 9:27 0% ` David Marchand
2021-10-19 9:38 0% ` Andrew Rybchenko
2021-10-19 9:42 0% ` Thomas Monjalon
1 sibling, 2 replies; 200+ results
From: David Marchand @ 2021-10-19 9:27 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon
Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
Jerin Jacob, Nithin Dabilpuram, dev
On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> On 10/19/21 11:49 AM, David Marchand wrote:
> > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru> wrote:
> >>
> >> Add RTE_ prefix to macro used to register mempool driver.
> >> The old one is still available but deprecated.
> >
> > ODP seems to use its own mempools.
> >
> > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> >
> > I'd say it counts as a driver macro.
> > If so, we could hide it in a driver-only header, along with
> > rte_mempool_register_ops getting marked as internal.
> >
> > $ git grep-all -w rte_mempool_register_ops
> > FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
> > FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
>
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
>
> Should I add one more patch to the series?
If we want to do the full job, we need to inspect driver-only symbols
in rte_mempool.h.
But this goes way further than the simple prefixing this series intended.
I just read your reply, I think we agree.
Let's go with the simple prefix and take a note to clean up in the future.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
2021-10-19 9:27 0% ` David Marchand
@ 2021-10-19 9:38 0% ` Andrew Rybchenko
2021-10-19 9:42 0% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-19 9:38 UTC (permalink / raw)
To: David Marchand, Thomas Monjalon
Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
Jerin Jacob, Nithin Dabilpuram, dev
On 10/19/21 12:27 PM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> On 10/19/21 11:49 AM, David Marchand wrote:
>>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>
>>>> Add RTE_ prefix to macro used to register mempool driver.
>>>> The old one is still available but deprecated.
>>>
>>> ODP seems to use its own mempools.
>>>
>>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>>
>>> I'd say it counts as a driver macro.
>>> If so, we could hide it in a driver-only header, along with
>>> rte_mempool_register_ops getting marked as internal.
>>>
>>> $ git grep-all -w rte_mempool_register_ops
>>> FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
>>> FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
>>
>> Do I understand correctly that it is required to remove it from
>> stable ABI/API, but still allow external SW to use it?
>>
>> Should I add one more patch to the series?
>
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
>
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.
Agreed.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
2021-10-19 9:27 0% ` David Marchand
2021-10-19 9:38 0% ` Andrew Rybchenko
@ 2021-10-19 9:42 0% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-19 9:42 UTC (permalink / raw)
To: Andrew Rybchenko, David Marchand
Cc: dev, Olivier Matz, Ray Kinsella, Artem V. Andreev,
Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram, dev
19/10/2021 11:27, David Marchand:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > On 10/19/21 11:49 AM, David Marchand wrote:
> > > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru> wrote:
> > >>
> > >> Add RTE_ prefix to macro used to register mempool driver.
> > >> The old one is still available but deprecated.
> > >
> > > ODP seems to use its own mempools.
> > >
> > > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> > >
> > > I'd say it counts as a driver macro.
> > > If so, we could hide it in a driver-only header, along with
> > > rte_mempool_register_ops getting marked as internal.
> > >
> > > $ git grep-all -w rte_mempool_register_ops
> > > FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
> > > FD.io-VPP/src/plugins/dpdk/buffer.c: rte_mempool_register_ops (&ops);
> >
> > Do I understand correctly that it is required to remove it from
> > stable ABI/API, but still allow external SW to use it?
> >
> > Should I add one more patch to the series?
>
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
>
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.
Yes, and we should probably discuss in techboard what should be kept
compatible for external mempool drivers.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v15 0/5] Add PIE support for HQoS library
@ 2021-10-19 12:18 0% ` Dumitrescu, Cristian
2021-10-19 12:45 3% ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
1 sibling, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-19 12:18 UTC (permalink / raw)
To: Liguzinski, WojciechX, dev, Singh, Jasvinder; +Cc: Ajmera, Megha
> -----Original Message-----
> From: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
> Sent: Tuesday, October 19, 2021 9:19 AM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [PATCH v15 0/5] Add PIE support for HQoS library
>
> DPDK sched library is equipped with mechanism that secures it from the
> bufferbloat problem
> which is a situation when excess buffers in the network cause high latency
> and latency
> variation. Currently, it supports RED for active queue management.
> However, more
> advanced queue management is required to address this problem and
> provide desirable
> quality of service to users.
>
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency
> to address
> the bufferbloat problem.
>
> The implementation of mentioned functionality includes modification of
> existing and
> adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is
> going
> to be prepared and sent.
>
> Liguzinski, WojciechX (5):
> sched: add PIE based congestion management
> example/qos_sched: add PIE support
> example/ip_pipeline: add PIE support
> doc/guides/prog_guide: added PIE
> app/test: add tests for PIE
>
> app/test/meson.build | 4 +
> app/test/test_pie.c | 1065 ++++++++++++++++++
> config/rte_config.h | 1 -
> doc/guides/prog_guide/glossary.rst | 3 +
> doc/guides/prog_guide/qos_framework.rst | 60 +-
> doc/guides/prog_guide/traffic_management.rst | 13 +-
> drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
> examples/ip_pipeline/tmgr.c | 142 +--
> examples/qos_sched/app_thread.c | 1 -
> examples/qos_sched/cfg_file.c | 127 ++-
> examples/qos_sched/cfg_file.h | 5 +
> examples/qos_sched/init.c | 27 +-
> examples/qos_sched/main.h | 3 +
> examples/qos_sched/profile.cfg | 196 ++--
> lib/sched/meson.build | 10 +-
> lib/sched/rte_pie.c | 86 ++
> lib/sched/rte_pie.h | 398 +++++++
> lib/sched/rte_sched.c | 241 ++--
> lib/sched/rte_sched.h | 63 +-
> lib/sched/version.map | 4 +
> 20 files changed, 2171 insertions(+), 284 deletions(-)
> create mode 100644 app/test/test_pie.c
> create mode 100644 lib/sched/rte_pie.c
> create mode 100644 lib/sched/rte_pie.h
>
> --
> 2.25.1
Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array
@ 2021-10-19 12:28 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-19 12:28 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh, Zhang,
Roy Fan, jianjay.zhou, asomalap, ruifeng.wang, Nicolau, Radu,
ajit.khaparde, rnagadheeraj, adwivedi, Power, Ciara
>
> Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
> 1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index ce0dca72be..56e3868ada 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -1832,13 +1832,18 @@ static inline uint16_t
> rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
> struct rte_crypto_op **ops, uint16_t nb_ops)
> {
> - struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> + const struct rte_crypto_fp_ops *fp_ops;
> + void *qp;
>
> rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> - nb_ops = (*dev->dequeue_burst)
> - (dev->data->queue_pairs[qp_id], ops, nb_ops);
> +
> + fp_ops = &rte_crypto_fp_ops[dev_id];
> + qp = fp_ops->qp.data[qp_id];
> +
> + nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
> +
> #ifdef RTE_CRYPTO_CALLBACKS
> - if (unlikely(dev->deq_cbs != NULL)) {
> + if (unlikely(fp_ops->qp.deq_cb != NULL)) {
> struct rte_cryptodev_cb_rcu *list;
> struct rte_cryptodev_cb *cb;
As I can see, you decided to keep the callback-related data structures as public API.
I wonder, is that to avoid extra changes in the CB-related code?
Or performance reasons?
Or probably something else?
>
> @@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> * not required.
> */
> - list = &dev->deq_cbs[qp_id];
> + list = &fp_ops->qp.deq_cb[qp_id];
> rte_rcu_qsbr_thread_online(list->qsbr, 0);
> cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
>
> @@ -1899,10 +1904,13 @@ static inline uint16_t
> rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> struct rte_crypto_op **ops, uint16_t nb_ops)
> {
> - struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> + const struct rte_crypto_fp_ops *fp_ops;
> + void *qp;
>
> + fp_ops = &rte_crypto_fp_ops[dev_id];
> + qp = fp_ops->qp.data[qp_id];
> #ifdef RTE_CRYPTO_CALLBACKS
> - if (unlikely(dev->enq_cbs != NULL)) {
> + if (unlikely(fp_ops->qp.enq_cb != NULL)) {
> struct rte_cryptodev_cb_rcu *list;
> struct rte_cryptodev_cb *cb;
>
> @@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> * not required.
> */
> - list = &dev->enq_cbs[qp_id];
> + list = &fp_ops->qp.enq_cb[qp_id];
> rte_rcu_qsbr_thread_online(list->qsbr, 0);
> cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
>
> @@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> #endif
>
> rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> - return (*dev->enqueue_burst)(
> - dev->data->queue_pairs[qp_id], ops, nb_ops);
> + return fp_ops->enqueue_burst(qp, ops, nb_ops);
> }
>
>
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v16 0/5] Add PIE support for HQoS library
2021-10-19 12:18 0% ` Dumitrescu, Cristian
@ 2021-10-19 12:45 3% ` Liguzinski, WojciechX
2021-10-20 7:49 3% ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-19 12:45 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat problem,
which is a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide the desired
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data structures
and adding a new set of data structures to the library, as well as adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 62 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/app_thread.c | 1 -
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 10 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 241 ++--
lib/sched/rte_sched.h | 63 +-
lib/sched/version.map | 4 +
20 files changed, 2172 insertions(+), 285 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
@ 2021-10-19 15:42 0% ` Medvedkin, Vladimir
0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-19 15:42 UTC (permalink / raw)
To: Stephen Hemminger, Ananyev, Konstantin
Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Richardson, Bruce
Hi Stephen,
On 19/10/2021 03:15, Stephen Hemminger wrote:
> On Mon, 18 Oct 2021 10:40:00 +0000
> "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
>
>>> On Fri, 15 Oct 2021 10:30:02 +0100
>>> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
>>>
>>>> + m[i * 8 + j] = (rss_key[i] << j)|
>>>> + (uint8_t)((uint16_t)(rss_key[i + 1]) >>
>>>> + (8 - j));
>>>> + }
>>>
>>> This ends up being harder than necessary to read. Maybe split into
>>> multiple statements and/or use temporary variable.
>>>
>>>> +RTE_INIT(rte_thash_gfni_init)
>>>> +{
>>>> + rte_thash_gfni_supported = 0;
>>>
>>> Not necessary in C globals are initialized to zero by default.
>>>
>>> By removing that the constructor can be totally behind #ifdef
>>>
>>>> +__rte_internal
>>>> +static inline __m512i
>>>> +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
>>>> + const uint8_t *secondary_tuple, int len)
>>>> +{
>>>> + __m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
>>>> + 6, 5, 4, 3, 6, 5, 4, 3,
>>>> + 5, 4, 3, 2, 5, 4, 3, 2,
>>>> + 4, 3, 2, 1, 4, 3, 2, 1,
>>>> + 3, 2, 1, 0, 3, 2, 1, 0,
>>>> + 2, 1, 0, -1, 2, 1, 0, -1,
>>>> + 1, 0, -1, -2, 1, 0, -1, -2,
>>>> + 0, -1, -2, -3, 0, -1, -2, -3);
>>>
>>> NAK
>>>
>>> Please don't put the implementation in an inline. This makes it harder
>>> to support (API/ABI) and blocks other architectures from implementing
>>> same thing with different instructions.
>>
>> I don't really understand your reasoning here.
>> rte_thash_gfni.h is an arch-specific header, which provides
>> arch-specific optimizations for RSS hash calculation
>> (Vladimir pls correct me if I am wrong here).
>
> Ok, but rte_thash_gfni.h is included on all architectures.
>
Ok, I'll rework the patch to move the x86 + AVX512 related things into an
x86 arch-specific header. Would that suit?
>> We do have dozens of inline functions that do use arch-specific instructions (both x86 and arm)
>> for different purposes:
>> sync primitives, memory-ordering, cache manipulations, LPM lookup, TSX, power-saving, etc.
>> That's a usual trade-off taken for performance reasons, when extra function call
>> costs too much comparing to the operation itself.
>> Why it suddenly became a problem for that particular case and how exactly it blocks other architectures?
>> Also I don't understand how it makes things harder in terms of API/ABI stability.
>> As I can see this patch doesn't introduce any public structs/unions.
>> All functions take as arguments just raw data buffers and length.
>> To summarize - in general, I don't see any good reason why this patch shouldn't be allowed.
>> Konstantin
>
> The comments about rte_thash_gfni_supported initialization still apply.
> Why not:
>
> #ifdef __GFNI__
> RTE_INIT(rte_thash_gfni_init)
> {
> if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_GFNI))
> rte_thash_gfni_supported = 1;
> }
> #endif
>
Agreed, I'll reflect these changes in v3.
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] test/hash: fix buffer overflow
@ 2021-10-19 15:57 0% ` Medvedkin, Vladimir
0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-19 15:57 UTC (permalink / raw)
To: David Marchand
Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Bruce Richardson, dpdk stable
Hi David,
On 19/10/2021 09:02, David Marchand wrote:
> On Fri, Oct 15, 2021 at 3:02 PM Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com> wrote:
>>> I am confused.
>>> Does it mean that rte_jhash_32b is not compliant with rte_hash_create API?
>>>
>>
>> I think so too, because despite the fact that the ABI is the same, the
>> API remains different with respect to the length argument.
>
> Sorry I don't follow you with "ABI is the same".
> Can you explain please?
>
I meant that rte_hash accepts:
/** Type of function that can be used for calculating the hash value. */
typedef uint32_t (*rte_hash_function)(const void *key, uint32_t key_len,
uint32_t init_val);
as a hash function. And the signatures of rte_jhash() and rte_jhash_32b()
are the same, but they differ in the semantics of the "key_len" argument.
Internally rte_hash passes the length of the key counted in bytes to these
functions, so problems appear if the configured hash function treats the
key_len as something other than the size in bytes.
>
> I am not against the fix, but it seems to test something different
> than what an application using the hash library would do.
> Or if an application directly calls this hash function, maybe the unit
> test should not test it via rte_hash_create (which seems to defeat the
> abstraction).
>
I'd say that users should not use this hash function with rte_hash.
Yipeng, Sameh, Bruce,
what do you think?
>
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure
@ 2021-10-19 16:00 0% ` Zhang, Roy Fan
0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-19 16:00 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
Ciara, Troy, Rebecca
Apart from the scheduler PMD changes required, as mentioned by Ciara,
re-acking this patch as all doubts are cleared on our end.
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 18, 2021 3:42 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>; Troy, Rebecca
> <rebecca.troy@intel.com>
> Subject: [PATCH v3 3/7] cryptodev: move inline APIs into separate structure
>
> Move fastpath inline function pointers from rte_cryptodev into a
> separate structure accessed via a flat array.
> The intension is to make rte_cryptodev and related structures private
> to avoid future API/ABI breakages.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Tested-by: Rebecca Troy <rebecca.troy@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/cryptodev/cryptodev_pmd.c | 53
> +++++++++++++++++++++++++++++-
> lib/cryptodev/cryptodev_pmd.h | 11 +++++++
> lib/cryptodev/rte_cryptodev.c | 19 +++++++++++
> lib/cryptodev/rte_cryptodev_core.h | 29 ++++++++++++++++
> lib/cryptodev/version.map | 5 +++
> 5 files changed, 116 insertions(+), 1 deletion(-)
>
> diff --git a/lib/cryptodev/cryptodev_pmd.c
> b/lib/cryptodev/cryptodev_pmd.c
> index 44a70ecb35..fd74543682 100644
> --- a/lib/cryptodev/cryptodev_pmd.c
> +++ b/lib/cryptodev/cryptodev_pmd.c
> @@ -3,7 +3,7 @@
> */
>
> #include <sys/queue.h>
> -
> +#include <rte_errno.h>
> #include <rte_string_fns.h>
> #include <rte_malloc.h>
>
> @@ -160,3 +160,54 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev
> *cryptodev)
>
> return 0;
> }
> +
> +static uint16_t
> +dummy_crypto_enqueue_burst(__rte_unused void *qp,
> + __rte_unused struct rte_crypto_op **ops,
> + __rte_unused uint16_t nb_ops)
> +{
> + CDEV_LOG_ERR(
> + "crypto enqueue burst requested for unconfigured device");
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +static uint16_t
> +dummy_crypto_dequeue_burst(__rte_unused void *qp,
> + __rte_unused struct rte_crypto_op **ops,
> + __rte_unused uint16_t nb_ops)
> +{
> + CDEV_LOG_ERR(
> + "crypto dequeue burst requested for unconfigured device");
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +void
> +cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
> +{
> + static struct rte_cryptodev_cb_rcu
> dummy_cb[RTE_MAX_QUEUES_PER_PORT];
> + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> + static const struct rte_crypto_fp_ops dummy = {
> + .enqueue_burst = dummy_crypto_enqueue_burst,
> + .dequeue_burst = dummy_crypto_dequeue_burst,
> + .qp = {
> + .data = dummy_data,
> + .enq_cb = dummy_cb,
> + .deq_cb = dummy_cb,
> + },
> + };
> +
> + *fp_ops = dummy;
> +}
> +
> +void
> +cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
> + const struct rte_cryptodev *dev)
> +{
> + fp_ops->enqueue_burst = dev->enqueue_burst;
> + fp_ops->dequeue_burst = dev->dequeue_burst;
> + fp_ops->qp.data = dev->data->queue_pairs;
> + fp_ops->qp.enq_cb = dev->enq_cbs;
> + fp_ops->qp.deq_cb = dev->deq_cbs;
> +}
> diff --git a/lib/cryptodev/cryptodev_pmd.h
> b/lib/cryptodev/cryptodev_pmd.h
> index 36606dd10b..a71edbb991 100644
> --- a/lib/cryptodev/cryptodev_pmd.h
> +++ b/lib/cryptodev/cryptodev_pmd.h
> @@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
> driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
> }
>
> +/* Reset crypto device fastpath APIs to dummy values. */
> +__rte_internal
> +void
> +cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
> +
> +/* Setup crypto device fastpath APIs. */
> +__rte_internal
> +void
> +cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
> + const struct rte_cryptodev *dev);
> +
> static inline void *
> get_sym_session_private_data(const struct rte_cryptodev_sym_session
> *sess,
> uint8_t driver_id) {
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index eb86e629aa..305e013ebb 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
> .nb_devs = 0
> };
>
> +/* Public fastpath APIs. */
> +struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
> +
> /* spinlock for crypto device callbacks */
> static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
> @@ -917,6 +920,8 @@ rte_cryptodev_pmd_release_device(struct
> rte_cryptodev *cryptodev)
>
> dev_id = cryptodev->data->dev_id;
>
> + cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
> +
> /* Close device only if device operations have been set */
> if (cryptodev->dev_ops) {
> ret = rte_cryptodev_close(dev_id);
> @@ -1080,6 +1085,9 @@ rte_cryptodev_start(uint8_t dev_id)
> }
>
> diag = (*dev->dev_ops->dev_start)(dev);
> + /* expose selection of PMD fast-path functions */
> + cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
> +
> rte_cryptodev_trace_start(dev_id, diag);
> if (diag == 0)
> dev->data->dev_started = 1;
> @@ -1109,6 +1117,9 @@ rte_cryptodev_stop(uint8_t dev_id)
> return;
> }
>
> + /* point fast-path functions to dummy ones */
> + cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
> +
> (*dev->dev_ops->dev_stop)(dev);
> rte_cryptodev_trace_stop(dev_id);
> dev->data->dev_started = 0;
> @@ -2411,3 +2422,11 @@ rte_cryptodev_allocate_driver(struct
> cryptodev_driver *crypto_drv,
>
> return nb_drivers++;
> }
> +
> +RTE_INIT(cryptodev_init_fp_ops)
> +{
> + uint32_t i;
> +
> + for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
> + cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
> +}
> diff --git a/lib/cryptodev/rte_cryptodev_core.h
> b/lib/cryptodev/rte_cryptodev_core.h
> index 1633e55889..e9e9a44b3c 100644
> --- a/lib/cryptodev/rte_cryptodev_core.h
> +++ b/lib/cryptodev/rte_cryptodev_core.h
> @@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
> struct rte_crypto_op **ops, uint16_t nb_ops);
> /**< Enqueue packets for processing on queue pair of a device. */
>
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path cryptodev inline functions in advance.
> + */
> +struct rte_cryptodev_qpdata {
> + /** points to array of internal queue pair data pointers. */
> + void **data;
> + /** points to array of enqueue callback data pointers */
> + struct rte_cryptodev_cb_rcu *enq_cb;
> + /** points to array of dequeue callback data pointers */
> + struct rte_cryptodev_cb_rcu *deq_cb;
> +};
> +
> +struct rte_crypto_fp_ops {
> + /** PMD enqueue burst function. */
> + enqueue_pkt_burst_t enqueue_burst;
> + /** PMD dequeue burst function. */
> + dequeue_pkt_burst_t dequeue_burst;
> + /** Internal queue pair data pointers. */
> + struct rte_cryptodev_qpdata qp;
> + /** Reserved for future ops. */
> + uintptr_t reserved[4];
> +} __rte_cache_aligned;
> +
> +extern struct rte_crypto_fp_ops
> rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
> +
> /**
> * @internal
> * The data part, with no function pointers, associated with each device.
> diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
> index 43cf937e40..ed62ced221 100644
> --- a/lib/cryptodev/version.map
> +++ b/lib/cryptodev/version.map
> @@ -45,6 +45,9 @@ DPDK_22 {
> rte_cryptodev_sym_session_init;
> rte_cryptodevs;
>
> + #added in 21.11
> + rte_crypto_fp_ops;
> +
> local: *;
> };
>
> @@ -109,6 +112,8 @@ EXPERIMENTAL {
> INTERNAL {
> global:
>
> + cryptodev_fp_ops_reset;
> + cryptodev_fp_ops_set;
> rte_cryptodev_allocate_driver;
> rte_cryptodev_pmd_allocate;
> rte_cryptodev_pmd_callback_process;
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal
@ 2021-10-19 18:35 4% ` Harman Kalra
2021-10-19 18:35 1% ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Moving struct rte_intr_handle to be an internal structure to
avoid any ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and which also hides the struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: malloc: introduce malloc is ready API
This patch introduces a new API which tells if the DPDK memory
subsystem is initialized and the rte_malloc* APIs are ready to be
used. If rte_malloc* is set up, memory for the interrupt instance
is allocated using rte_malloc, else using traditional heap APIs.
Patch 2: eal/interrupts: implement get set APIs
This patch provides prototypes and implementation of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for
an interrupt handle instance. Currently most drivers define the
interrupt handle instance as static, but now it can't be static as
the size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll related definitions and prototypes are moved into a
new header, i.e. rte_epoll.h, and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the
series rte_eal_interrupts.h is removed.
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating the interrupt test suite to use the interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to
allocate the interrupt instance, use get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed, and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated as per device requirement using the new API
rte_intr_handle_event_list_update().
Eg, on PCI device probing the MSI-X size can be queried and these
arrays reallocated accordingly.
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, as the memory allocated for the alarm
interrupt instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd power functionality with octeontx2 and i40e Intel
cards, where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed the flag from the instance alloc API; instead auto detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*.
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typos in the APIs documentation.
* Better names for some internal variables.
Harman Kalra (7):
malloc: introduce malloc is ready API
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 162 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 26 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 19 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 14 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 +++-
drivers/bus/pci/pci_common.c | 27 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 108 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 22 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 21 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 59 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 18 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 51 +-
drivers/net/mlx5/linux/mlx5_socket.c | 24 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 35 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 11 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 75 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 47 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 61 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 9 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 585 ++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/malloc_heap.c | 19 +-
lib/eal/common/malloc_heap.h | 3 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 52 +-
lib/eal/freebsd/eal_interrupts.c | 92 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 648 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 287 +++++---
lib/eal/version.map | 47 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
134 files changed, 3568 insertions(+), 1709 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 18:35 4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-19 18:35 1% ` Harman Kalra
2021-10-19 21:27 4% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
2 files changed, 234 insertions(+), 145 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..846ca4aa89 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +162,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +185,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +241,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +282,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +296,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +329,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +381,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +405,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +423,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +447,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +459,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +482,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +495,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +566,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +578,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +590,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a250a9df66 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,21 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +599,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +640,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +650,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +682,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +714,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +772,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +795,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +838,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +848,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +906,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +939,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1016,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1056,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1066,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1169,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1233,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1467,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1490,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1502,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1527,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1549,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1558,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1595,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1617,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 18:35 1% ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-19 21:27 4% ` Dmitry Kozlyuk
2021-10-20 9:25 3% ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-19 21:27 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-20 00:05 (UTC+0530), Harman Kalra:
> Making changes to the interrupt framework to use interrupt handle
> APIs to get/set any field. Direct access to any of the fields
> should be avoided to avoid any ABI breakage in future.
I get and accept the point why EAL also should use the API.
However, mentioning ABI is still a wrong wording.
There is no ABI between EAL structures and EAL functions by definition of ABI.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
> lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
> 2 files changed, 234 insertions(+), 145 deletions(-)
>
> diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
[...]
> @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
> ret = -ENOMEM;
> goto fail;
> } else {
> - src->intr_handle = *intr_handle;
> - TAILQ_INIT(&src->callbacks);
> - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> + src->intr_handle = rte_intr_instance_alloc();
> + if (src->intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Can not create intr instance\n");
> + free(callback);
> + ret = -ENOMEM;
goto fail?
> + } else {
> + rte_intr_instance_copy(src->intr_handle,
> + intr_handle);
> + TAILQ_INIT(&src->callbacks);
> + TAILQ_INSERT_TAIL(&intr_sources, src,
> + next);
> + }
> }
> }
>
[...]
> @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
> struct rte_intr_callback *cb, *next;
>
> /* do parameter checking first */
> - if (intr_handle == NULL || intr_handle->fd < 0) {
> + if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
The handle is checked for NULL inside the accessor, here and in other places:
grep -R 'intr_handle == NULL ||' lib/eal
> RTE_LOG(ERR, EAL,
> "Unregistering with invalid input parameter\n");
> return -EINVAL;
> diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
[...]
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v17 0/5] Add PIE support for HQoS library
2021-10-19 12:45 3% ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
@ 2021-10-20 7:49 3% ` Liguzinski, WojciechX
2021-10-25 11:32 3% ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
0 siblings, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-20 7:49 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide desirable
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced), which can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of this functionality modifies existing data structures, adds a new
set of data structures to the library, and adds PIE-related APIs.
This affects structures in the public API/ABI, so a deprecation notice will be
prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/app_thread.c | 1 -
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 10 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 241 ++--
lib/sched/rte_sched.h | 63 +-
lib/sched/version.map | 4 +
20 files changed, 2173 insertions(+), 286 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit
@ 2021-10-20 9:22 0% ` Peng, ZhihongX
0 siblings, 0 replies; 200+ results
From: Peng, ZhihongX @ 2021-10-20 9:22 UTC (permalink / raw)
To: olivier.matz, dmitry.kozliuk; +Cc: dev
> -----Original Message-----
> From: Peng, ZhihongX <zhihongx.peng@intel.com>
> Sent: Monday, October 18, 2021 9:59 PM
> To: olivier.matz@6wind.com; dmitry.kozliuk@gmail.com
> Cc: dev@dpdk.org; Peng, ZhihongX <zhihongx.peng@intel.com>
> Subject: [PATCH v5] lib/cmdline: release cl when cmdline exit
>
> From: Zhihong Peng <zhihongx.peng@intel.com>
>
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be released alone.
>
> Fixes: af75078fece3 ("first public release")
> Cc: intel.com
>
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
> app/test/test.c | 1 -
> app/test/test_cmdline_lib.c | 1 -
> doc/guides/rel_notes/release_21_11.rst | 3 +++
> lib/cmdline/cmdline_socket.c | 1 +
> 4 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/app/test/test.c b/app/test/test.c index 173d202e47..5194131026
> 100644
> --- a/app/test/test.c
> +++ b/app/test/test.c
> @@ -233,7 +233,6 @@ main(int argc, char **argv)
>
> cmdline_interact(cl);
> cmdline_stdin_exit(cl);
> - cmdline_free(cl);
> }
> #endif
> ret = 0;
> diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c index
> d5a09b4541..6bcfa6511e 100644
> --- a/app/test/test_cmdline_lib.c
> +++ b/app/test/test_cmdline_lib.c
> @@ -174,7 +174,6 @@ test_cmdline_socket_fns(void)
> /* void functions */
> cmdline_stdin_exit(NULL);
>
> - cmdline_free(cl);
> return 0;
> error:
> printf("Error: function accepted null parameter!\n"); diff --git
> a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index d5435a64aa..6aa98d1e34 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -237,6 +237,9 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
> + Calls to ``cmdline_free()`` after it need to be deleted from applications.
> +
>
> ABI Changes
> -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
> return;
>
> terminal_restore(cl);
> + cmdline_free(cl);
> }
> --
> 2.25.1
Tested-by: Zhihong Peng <zhihongx.peng@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 21:27 4% ` Dmitry Kozlyuk
@ 2021-10-20 9:25 3% ` Harman Kalra
0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-20 9:25 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 2:58 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>;
> david.marchand@redhat.com; mdr@ashroe.eu; thomas@monjalon.net
> Subject: [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to
> interrupt handle
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-20 00:05 (UTC+0530), Harman Kalra:
> > Making changes to the interrupt framework to use interrupt handle APIs
> > to get/set any field. Direct access to any of the fields should be
> > avoided to avoid any ABI breakage in future.
>
> I get and accept the point why EAL also should use the API.
> However, mentioning ABI is still a wrong wording.
> There is no ABI between EAL structures and EAL functions by definition of
> ABI.
Sure, I will reword the commit message without the ABI mention.
>
> >
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > ---
> > lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
> > lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
> > 2 files changed, 234 insertions(+), 145 deletions(-)
> >
> > diff --git a/lib/eal/freebsd/eal_interrupts.c
> > b/lib/eal/freebsd/eal_interrupts.c
> [...]
> > @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct
> rte_intr_handle *intr_handle,
> > ret = -ENOMEM;
> > goto fail;
> > } else {
> > - src->intr_handle = *intr_handle;
> > - TAILQ_INIT(&src->callbacks);
> > - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> > + src->intr_handle = rte_intr_instance_alloc();
> > + if (src->intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Can not create
> intr instance\n");
> > + free(callback);
> > + ret = -ENOMEM;
>
> goto fail?
I think the goto is not required, as we are not setting wake_thread = 1 here;
the API will just return an error after unlocking the spinlock and emitting the trace.
>
> > + } else {
> > + rte_intr_instance_copy(src-
> >intr_handle,
> > + intr_handle);
> > + TAILQ_INIT(&src->callbacks);
> > + TAILQ_INSERT_TAIL(&intr_sources,
> src,
> > + next);
> > + }
> > }
> > }
> >
> [...]
> > @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct
> rte_intr_handle *intr_handle,
> > struct rte_intr_callback *cb, *next;
> >
> > /* do parameter checking first */
> > - if (intr_handle == NULL || intr_handle->fd < 0) {
> > + if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
>
> The handle is checked for NULL inside the accessor, here and in other places:
> grep -R 'intr_handle == NULL ||' lib/eal
Ack, I will remove these NULL checks.
>
> > RTE_LOG(ERR, EAL,
> > "Unregistering with invalid input parameter\n");
> > return -EINVAL;
>
> > diff --git a/lib/eal/linux/eal_interrupts.c
> > b/lib/eal/linux/eal_interrupts.c
> [...]
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] port: eventdev port api promoted
@ 2021-10-20 9:55 3% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-20 9:55 UTC (permalink / raw)
To: Thomas Monjalon, David Marchand, Rahul Shah; +Cc: dev, Cristian Dumitrescu
On 13/10/2021 13:12, Thomas Monjalon wrote:
> +Cc Cristian, the maintainer
>
> 10/09/2021 15:40, Kinsella, Ray:
>> On 10/09/2021 08:36, David Marchand wrote:
>>> On Fri, Sep 10, 2021 at 9:31 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>>>> On 09/09/2021 17:40, Rahul Shah wrote:
>>>>> rte_port_eventdev_reader_ops, rte_port_eventdev_writer_nodrops_ops,
>>>>> rte_port_eventdev_writer_ops symbols promoted
>>>>>
>>>>> Signed-off-by: Rahul Shah <rahul.r.shah@intel.com>
>>>>> ---
>>>>> lib/port/version.map | 8 +++-----
>>>>> 1 file changed, 3 insertions(+), 5 deletions(-)
>>>>
>>>> Hi Rahul,
>>>>
>>>> You need to strip the __rte_experimental attribute in the header file also.
>>>
>>> That's what I first thought... but those are variables, and there were
>>> not marked in the header.
>>
>> My mistake - should have checked.
>>
>>> At least, those symbols must be alphabetically sorted in version.map.
>>>
>>> About checking for experimental mark on variables... I had a patch,
>>> but never got it in.
>>> I think we should instead (forbid such exports and|insist on) rework
>>> API / libraries that rely on public variables.
>>
>> I'll pull together a script to identify all the variables in DPDK.
>> Are you expecting the rework on the port api to be done prior to 21.11?
>
> Does it mean we should not promote these variables?
>
>
So the net-net is that variables are almost impossible to version.
Think about maintaining two parallel versions of the same variable, and having to track and reconcile state between them.
So variables make ABI versioning (and maintenance) harder, and are best avoided.
In this particular case, I would suggest leaving these as experimental and improving the API post-21.11.
Ray K
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure
@ 2021-10-20 11:27 2% ` Akhil Goyal
2021-10-20 11:27 3% ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-20 11:27 7% ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal, Rebecca Troy
Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intension is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/cryptodev/cryptodev_pmd.c | 53 +++++++++++++++++++++++++++++-
lib/cryptodev/cryptodev_pmd.h | 11 +++++++
lib/cryptodev/rte_cryptodev.c | 19 +++++++++++
lib/cryptodev/rte_cryptodev_core.h | 29 ++++++++++++++++
lib/cryptodev/version.map | 5 +++
5 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 44a70ecb35..fd74543682 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -3,7 +3,7 @@
*/
#include <sys/queue.h>
-
+#include <rte_errno.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -160,3 +160,54 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
return 0;
}
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused void *qp,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto enqueue burst requested for unconfigured device");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused void *qp,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto dequeue burst requested for unconfigured device");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
+{
+ static struct rte_cryptodev_cb_rcu dummy_cb[RTE_MAX_QUEUES_PER_PORT];
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_crypto_fp_ops dummy = {
+ .enqueue_burst = dummy_crypto_enqueue_burst,
+ .dequeue_burst = dummy_crypto_dequeue_burst,
+ .qp = {
+ .data = dummy_data,
+ .enq_cb = dummy_cb,
+ .deq_cb = dummy_cb,
+ },
+ };
+
+ *fp_ops = dummy;
+}
+
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+ const struct rte_cryptodev *dev)
+{
+ fp_ops->enqueue_burst = dev->enqueue_burst;
+ fp_ops->dequeue_burst = dev->dequeue_burst;
+ fp_ops->qp.data = dev->data->queue_pairs;
+ fp_ops->qp.enq_cb = dev->enq_cbs;
+ fp_ops->qp.deq_cb = dev->deq_cbs;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 36606dd10b..a71edbb991 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
}
+/* Reset crypto device fastpath APIs to dummy values. */
+__rte_internal
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
+
+/* Setup crypto device fastpath APIs. */
+__rte_internal
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+ const struct rte_cryptodev *dev);
+
static inline void *
get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index eb86e629aa..305e013ebb 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
/* spinlock for crypto device callbacks */
static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -917,6 +920,8 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
dev_id = cryptodev->data->dev_id;
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
/* Close device only if device operations have been set */
if (cryptodev->dev_ops) {
ret = rte_cryptodev_close(dev_id);
@@ -1080,6 +1085,9 @@ rte_cryptodev_start(uint8_t dev_id)
}
diag = (*dev->dev_ops->dev_start)(dev);
+ /* expose selection of PMD fast-path functions */
+ cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
+
rte_cryptodev_trace_start(dev_id, diag);
if (diag == 0)
dev->data->dev_started = 1;
@@ -1109,6 +1117,9 @@ rte_cryptodev_stop(uint8_t dev_id)
return;
}
+ /* point fast-path functions to dummy ones */
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
(*dev->dev_ops->dev_stop)(dev);
rte_cryptodev_trace_stop(dev_id);
dev->data->dev_started = 0;
@@ -2411,3 +2422,11 @@ rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
return nb_drivers++;
}
+
+RTE_INIT(cryptodev_init_fp_ops)
+{
+ uint32_t i;
+
+ for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
+}
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..2bb9a228c1 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
struct rte_crypto_op **ops, uint16_t nb_ops);
/**< Enqueue packets for processing on queue pair of a device. */
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose to expose these pointers at all - allow compiler
+ * to fetch this data for fast-path cryptodev inline functions in advance.
+ */
+struct rte_cryptodev_qpdata {
+ /** points to array of internal queue pair data pointers. */
+ void **data;
+ /** points to array of enqueue callback data pointers */
+ struct rte_cryptodev_cb_rcu *enq_cb;
+ /** points to array of dequeue callback data pointers */
+ struct rte_cryptodev_cb_rcu *deq_cb;
+};
+
+struct rte_crypto_fp_ops {
+ /** PMD enqueue burst function. */
+ enqueue_pkt_burst_t enqueue_burst;
+ /** PMD dequeue burst function. */
+ dequeue_pkt_burst_t dequeue_burst;
+ /** Internal queue pair data pointers. */
+ struct rte_cryptodev_qpdata qp;
+ /** Reserved for future ops. */
+ uintptr_t reserved[3];
+} __rte_cache_aligned;
+
+extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
/**
* @internal
* The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 43cf937e40..ed62ced221 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -45,6 +45,9 @@ DPDK_22 {
rte_cryptodev_sym_session_init;
rte_cryptodevs;
+ #added in 21.11
+ rte_crypto_fp_ops;
+
local: *;
};
@@ -109,6 +112,8 @@ EXPERIMENTAL {
INTERNAL {
global:
+ cryptodev_fp_ops_reset;
+ cryptodev_fp_ops_set;
rte_cryptodev_allocate_driver;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
--
2.25.1
^ permalink raw reply [relevance 2%]
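The lifecycle of the flat fast-path table in the patch above (reset every slot to dummy callbacks in an RTE_INIT constructor, populate a slot at dev_start, reset it again at dev_stop/release) can be modeled in a few lines. The types and names below are simplified hypothetical stand-ins, not the actual DPDK symbols:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_DEVS 4

typedef uint16_t (*burst_fn)(void *qp, void **ops, uint16_t nb_ops);

/* Simplified stand-in for struct rte_crypto_fp_ops. */
struct fp_ops {
	burst_fn enqueue_burst;
	burst_fn dequeue_burst;
	void **qp_data;
};

/* Flat array indexed by device ID, like rte_crypto_fp_ops[]. */
static struct fp_ops fp_ops_tbl[MAX_DEVS];

/* Dummy callback installed while a device is stopped or released:
 * calling through a reset slot is safe and processes nothing. */
static uint16_t dummy_burst(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops; (void)nb_ops;
	return 0;
}

static void fp_ops_reset(struct fp_ops *fpo)
{
	fpo->enqueue_burst = dummy_burst;
	fpo->dequeue_burst = dummy_burst;
	fpo->qp_data = NULL;
}

/* Driver-private device struct the real callbacks are copied from. */
struct dev {
	burst_fn enqueue_burst;
	burst_fn dequeue_burst;
	void **queue_pairs;
};

/* Populate a slot at start time, mirroring cryptodev_fp_ops_set(). */
static void fp_ops_set(struct fp_ops *fpo, const struct dev *d)
{
	fpo->enqueue_burst = d->enqueue_burst;
	fpo->dequeue_burst = d->dequeue_burst;
	fpo->qp_data = d->queue_pairs;
}

/* Constructor-time init, mirroring RTE_INIT(cryptodev_init_fp_ops). */
static void fp_ops_init_all(void)
{
	unsigned int i;

	for (i = 0; i < MAX_DEVS; i++)
		fp_ops_reset(&fp_ops_tbl[i]);
}
```

Because every slot always holds callable function pointers, a burst call on a stopped or unused device ID degrades to a harmless no-op instead of a crash.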
* [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array
2021-10-20 11:27 2% ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-20 11:27 3% ` Akhil Goyal
2021-10-20 11:27 7% ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal
Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ce0dca72be..56e3868ada 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1832,13 +1832,18 @@ static inline uint16_t
rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
{
- struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+ const struct rte_crypto_fp_ops *fp_ops;
+ void *qp;
rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
- nb_ops = (*dev->dequeue_burst)
- (dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qp = fp_ops->qp.data[qp_id];
+
+ nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
+
#ifdef RTE_CRYPTO_CALLBACKS
- if (unlikely(dev->deq_cbs != NULL)) {
+ if (unlikely(fp_ops->qp.deq_cb != NULL)) {
struct rte_cryptodev_cb_rcu *list;
struct rte_cryptodev_cb *cb;
@@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- list = &dev->deq_cbs[qp_id];
+ list = &fp_ops->qp.deq_cb[qp_id];
rte_rcu_qsbr_thread_online(list->qsbr, 0);
cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
@@ -1899,10 +1904,13 @@ static inline uint16_t
rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
{
- struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+ const struct rte_crypto_fp_ops *fp_ops;
+ void *qp;
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qp = fp_ops->qp.data[qp_id];
#ifdef RTE_CRYPTO_CALLBACKS
- if (unlikely(dev->enq_cbs != NULL)) {
+ if (unlikely(fp_ops->qp.enq_cb != NULL)) {
struct rte_cryptodev_cb_rcu *list;
struct rte_cryptodev_cb *cb;
@@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- list = &dev->enq_cbs[qp_id];
+ list = &fp_ops->qp.enq_cb[qp_id];
rte_rcu_qsbr_thread_online(list->qsbr, 0);
cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
@@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
#endif
rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
- return (*dev->enqueue_burst)(
- dev->data->queue_pairs[qp_id], ops, nb_ops);
+ return fp_ops->enqueue_burst(qp, ops, nb_ops);
}
--
2.25.1
^ permalink raw reply [relevance 3%]
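The reworked burst path above boils down to: index a flat array by dev_id, load the queue-pair pointer, and make one indirect call, without touching struct rte_cryptodev at all. A minimal standalone model of that dispatch (hypothetical simplified types, not the DPDK ones, and callback handling omitted):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_DEVS 4
#define MAX_QPS 2

typedef uint16_t (*burst_fn)(void *qp, void **ops, uint16_t nb_ops);

struct fp_ops {
	burst_fn enqueue_burst;
	void **qp_data;
};

static struct fp_ops fp_ops_tbl[MAX_DEVS];

/* The inline wrapper after the rework: two loads from the flat table
 * (ops slot, then queue-pair pointer) and one indirect call. */
static inline uint16_t
enqueue_burst(uint8_t dev_id, uint16_t qp_id, void **ops, uint16_t nb_ops)
{
	const struct fp_ops *fpo = &fp_ops_tbl[dev_id];
	void *qp = fpo->qp_data[qp_id];

	return fpo->enqueue_burst(qp, ops, nb_ops);
}

/* Toy PMD backend that accepts every op it is offered. */
static uint16_t toy_enqueue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops;
}

static void *toy_qps[MAX_QPS];

static void toy_attach(uint8_t dev_id)
{
	fp_ops_tbl[dev_id].enqueue_burst = toy_enqueue;
	fp_ops_tbl[dev_id].qp_data = toy_qps;
}
```

Since the wrapper is inlined into applications, moving its data source from `rte_cryptodevs[]` to a dedicated cache-aligned table is what allows the device structure itself to be hidden later in the series.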
* [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures
2021-10-20 11:27 2% ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-20 11:27 3% ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
@ 2021-10-20 11:27 7% ` Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal, Rebecca Troy
The device-specific structures rte_cryptodev and
rte_cryptodev_data are moved to cryptodev_pmd.h
to hide them from applications.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 ++
drivers/crypto/ccp/ccp_dev.h | 2 +-
drivers/crypto/cnxk/cn10k_ipsec.c | 2 +-
drivers/crypto/cnxk/cn9k_ipsec.c | 2 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 2 +-
drivers/crypto/cnxk/cnxk_cryptodev_sec.c | 2 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev.c | 1 -
.../crypto/octeontx/otx_cryptodev_hw_access.c | 2 +-
.../crypto/octeontx/otx_cryptodev_hw_access.h | 2 +-
drivers/crypto/octeontx/otx_cryptodev_ops.h | 2 +-
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 2 +-
drivers/crypto/scheduler/scheduler_failover.c | 2 +-
.../crypto/scheduler/scheduler_multicore.c | 2 +-
.../scheduler/scheduler_pkt_size_distr.c | 2 +-
.../crypto/scheduler/scheduler_roundrobin.c | 2 +-
drivers/event/cnxk/cnxk_eventdev.h | 2 +-
drivers/event/dpaa/dpaa_eventdev.c | 2 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/octeontx/ssovf_evdev.c | 2 +-
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 2 +-
lib/cryptodev/cryptodev_pmd.h | 65 ++++++++++++++++++
lib/cryptodev/rte_cryptodev_core.h | 67 -------------------
lib/cryptodev/version.map | 2 +-
24 files changed, 91 insertions(+), 88 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index faa9164546..23bc854d16 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -328,6 +328,12 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cryptodev: Made the ``rte_cryptodev`` and ``rte_cryptodev_data`` private
+ structures internal to DPDK. ``rte_cryptodevs`` can no longer be accessed
+ directly by applications. While it is an ABI breakage, this change is
+ intended to be transparent for both users (no changes in user apps are
+ required) and PMD developers (no changes in PMDs are required).
+
* security: ``rte_security_set_pkt_metadata`` and ``rte_security_get_userdata``
routines used by inline outbound and inline inbound security processing were
made inline and enhanced to do simple 64-bit set/get for PMDs that do not
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index ca5145c278..85c8fc47a2 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -17,7 +17,7 @@
#include <rte_pci.h>
#include <rte_spinlock.h>
#include <rte_crypto_sym.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
/**< CCP sspecific */
#define MAX_HW_QUEUES 5
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index defc792aa8..27df1dcd64 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -3,7 +3,7 @@
*/
#include <rte_malloc.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_esp.h>
#include <rte_ip.h>
#include <rte_security.h>
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index 9ca4d20c62..53fb793654 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -2,7 +2,7 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_ip.h>
#include <rte_security.h>
#include <rte_security_driver.h>
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index a227e6981c..a53b489a04 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -2,7 +2,7 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_security.h>
#include "roc_api.h"
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
index 8d04d4b575..2021d5c77e 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
@@ -2,7 +2,7 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include <rte_security.h>
#include <rte_security_driver.h>
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index fe3ca25a0c..9edb0cc00f 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -3,7 +3,7 @@
*/
#include <rte_crypto.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_cycles.h>
#include <rte_errno.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index 05b78329d6..337d06aab8 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -4,7 +4,6 @@
#include <rte_bus_pci.h>
#include <rte_common.h>
-#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_log.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
index 7b89a62d81..20b288334a 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
@@ -7,7 +7,7 @@
#include <rte_branch_prediction.h>
#include <rte_common.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_errno.h>
#include <rte_mempool.h>
#include <rte_memzone.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 7c6b1e45b4..e48805fb09 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -7,7 +7,7 @@
#include <stdbool.h>
#include <rte_branch_prediction.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_cycles.h>
#include <rte_io.h>
#include <rte_memory.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.h b/drivers/crypto/octeontx/otx_cryptodev_ops.h
index f234f16970..83b82ea059 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.h
@@ -5,7 +5,7 @@
#ifndef _OTX_CRYPTODEV_OPS_H_
#define _OTX_CRYPTODEV_OPS_H_
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#define OTX_CPT_MIN_HEADROOM_REQ (24)
#define OTX_CPT_MIN_TAILROOM_REQ (8)
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
index 1a8edae7eb..f9e7b0b474 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2019 Marvell International Ltd.
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_ethdev.h>
#include "otx2_cryptodev.h"
diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 844312dd1b..5023577ef8 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -2,7 +2,7 @@
* Copyright(c) 2017 Intel Corporation
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 1e2e8dbf9f..900ab4049d 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -3,7 +3,7 @@
*/
#include <unistd.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 57e330a744..933a5c6978 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -2,7 +2,7 @@
* Copyright(c) 2017 Intel Corporation
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index bc4a632106..ace2dec2ec 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -2,7 +2,7 @@
* Copyright(c) 2017 Intel Corporation
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 8a5c737e4b..b57004c0dc 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -7,7 +7,7 @@
#include <string.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_devargs.h>
#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index ec74160325..1d7ddfe1d1 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -28,7 +28,7 @@
#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_dpaa_bus.h>
#include <rte_dpaa_logs.h>
#include <rte_cycles.h>
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 5ccf22f77f..e03afb2958 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -25,7 +25,7 @@
#include <rte_pci.h>
#include <rte_bus_vdev.h>
#include <ethdev_driver.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index b93f6ec8c6..9846fce34b 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -5,7 +5,7 @@
#include <inttypes.h>
#include <rte_common.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_debug.h>
#include <rte_dev.h>
#include <rte_eal.h>
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
index d9a002625c..d59d6c53f6 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
@@ -2,7 +2,7 @@
* Copyright (C) 2020-2021 Marvell.
*/
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
#include <rte_eventdev.h>
#include "otx2_cryptodev.h"
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 9bb1e47ae4..89bf2af399 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -52,6 +52,71 @@ struct rte_cryptodev_pmd_init_params {
unsigned int max_nb_queue_pairs;
};
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+ /** Device ID for this instance */
+ uint8_t dev_id;
+ /** Socket ID where memory is allocated */
+ uint8_t socket_id;
+ /** Unique identifier name */
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+ __extension__
+ /** Device state: STARTED(1)/STOPPED(0) */
+ uint8_t dev_started : 1;
+
+ /** Session memory pool */
+ struct rte_mempool *session_pool;
+ /** Array of pointers to queue pairs. */
+ void **queue_pairs;
+ /** Number of device queue pairs. */
+ uint16_t nb_queue_pairs;
+
+ /** PMD-specific private data */
+ void *dev_private;
+} __rte_cache_aligned;
+
+/** @internal The data structure associated with each crypto device. */
+struct rte_cryptodev {
+ /** Pointer to PMD dequeue function. */
+ dequeue_pkt_burst_t dequeue_burst;
+ /** Pointer to PMD enqueue function. */
+ enqueue_pkt_burst_t enqueue_burst;
+
+ /** Pointer to device data */
+ struct rte_cryptodev_data *data;
+ /** Functions exported by PMD */
+ struct rte_cryptodev_ops *dev_ops;
+ /** Feature flags exposes HW/SW features for the given device */
+ uint64_t feature_flags;
+ /** Backing device */
+ struct rte_device *device;
+
+ /** Crypto driver identifier*/
+ uint8_t driver_id;
+
+ /** User application callback for interrupts if present */
+ struct rte_cryptodev_cb_list link_intr_cbs;
+
+ /** Context for security ops */
+ void *security_ctx;
+
+ __extension__
+ /** Flag indicating the device is attached */
+ uint8_t attached : 1;
+
+ /** User application callback for pre enqueue processing */
+ struct rte_cryptodev_cb_rcu *enq_cbs;
+ /** User application callback for post dequeue processing */
+ struct rte_cryptodev_cb_rcu *deq_cbs;
+} __rte_cache_aligned;
+
/** Global structure used for maintaining state of allocated crypto devices */
struct rte_cryptodev_global {
struct rte_cryptodev *devs; /**< Device information array */
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 2bb9a228c1..16832f645d 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -54,73 +54,6 @@ struct rte_crypto_fp_ops {
extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
-/**
- * @internal
- * The data part, with no function pointers, associated with each device.
- *
- * This structure is safe to place in shared memory to be common among
- * different processes in a multi-process configuration.
- */
-struct rte_cryptodev_data {
- uint8_t dev_id;
- /**< Device ID for this instance */
- uint8_t socket_id;
- /**< Socket ID where memory is allocated */
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- /**< Unique identifier name */
-
- __extension__
- uint8_t dev_started : 1;
- /**< Device state: STARTED(1)/STOPPED(0) */
-
- struct rte_mempool *session_pool;
- /**< Session memory pool */
- void **queue_pairs;
- /**< Array of pointers to queue pairs. */
- uint16_t nb_queue_pairs;
- /**< Number of device queue pairs. */
-
- void *dev_private;
- /**< PMD-specific private data */
-} __rte_cache_aligned;
-
-
-/** @internal The data structure associated with each crypto device. */
-struct rte_cryptodev {
- dequeue_pkt_burst_t dequeue_burst;
- /**< Pointer to PMD receive function. */
- enqueue_pkt_burst_t enqueue_burst;
- /**< Pointer to PMD transmit function. */
-
- struct rte_cryptodev_data *data;
- /**< Pointer to device data */
- struct rte_cryptodev_ops *dev_ops;
- /**< Functions exported by PMD */
- uint64_t feature_flags;
- /**< Feature flags exposes HW/SW features for the given device */
- struct rte_device *device;
- /**< Backing device */
-
- uint8_t driver_id;
- /**< Crypto driver identifier*/
-
- struct rte_cryptodev_cb_list link_intr_cbs;
- /**< User application callback for interrupts if present */
-
- void *security_ctx;
- /**< Context for security ops */
-
- __extension__
- uint8_t attached : 1;
- /**< Flag indicating the device is attached */
-
- struct rte_cryptodev_cb_rcu *enq_cbs;
- /**< User application callback for pre enqueue processing */
-
- struct rte_cryptodev_cb_rcu *deq_cbs;
- /**< User application callback for post dequeue processing */
-} __rte_cache_aligned;
-
/**
* The pool of rte_cryptodev structures.
*/
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 157dac521d..b55b4b8e7e 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -43,7 +43,6 @@ DPDK_22 {
rte_cryptodev_sym_session_create;
rte_cryptodev_sym_session_free;
rte_cryptodev_sym_session_init;
- rte_cryptodevs;
#added in 21.11
rte_crypto_fp_ops;
@@ -125,4 +124,5 @@ INTERNAL {
rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_probing_finish;
rte_cryptodev_pmd_release_device;
+ rte_cryptodevs;
};
--
2.25.1
^ permalink raw reply [relevance 7%]
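The hiding technique in patch 8/8 is the classic opaque-handle split: the public header only forward-declares the structure, while the full layout lives in the PMD-only header. A toy single-file illustration with hypothetical names (in the real series the two halves live in rte_cryptodev.h and cryptodev_pmd.h):

```c
#include <assert.h>
#include <stdint.h>

/*
 * What a public header would expose: an opaque forward declaration plus
 * accessor prototypes. Applications can hold pointers to the object but
 * cannot peek at, or even size, the structure.
 */
struct cryptodev;                     /* layout hidden from applications */
uint8_t cryptodev_id(const struct cryptodev *dev);

/*
 * What a PMD-only header would contain: the full layout. Fields can be
 * added or reordered here without breaking application ABI, because no
 * application code ever dereferences the struct directly.
 */
struct cryptodev {
	uint8_t dev_id;
	void *dev_private;
};

uint8_t cryptodev_id(const struct cryptodev *dev)
{
	return dev->dev_id;
}
```

This only works once no fast-path inline function needs the layout, which is exactly what the earlier fp_ops patches in the series arrange.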
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
@ 2021-10-20 15:30 3% ` Dmitry Kozlyuk
2021-10-21 9:16 0% ` Harman Kalra
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-20 15:30 UTC (permalink / raw)
To: Harman Kalra
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
2021-10-19 08:32 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Tuesday, October 19, 2021 4:27 AM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> > <mdr@ashroe.eu>; david.marchand@redhat.com;
> > dmitry.kozliuk@gmail.com
> > Subject: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get
> > set APIs
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Tue, 19 Oct 2021 01:07:02 +0530
> > Harman Kalra <hkalra@marvell.com> wrote:
> >
> > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > + mem_allocator = rte_malloc_is_ready();
> > > + if (mem_allocator)
> > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > rte_intr_handle),
> > > + 0);
> > > + else
> > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> >
> > This is problematic way to do this.
> > The reason to use rte_malloc vs malloc should be determined by usage.
> >
> > If the pointer will be shared between primary/secondary process then it has
> > to be in hugepages (ie rte_malloc). If it is not shared then then use regular
> > malloc.
> >
> > But what you have done is created a method which will be a latent bug for
> > anyone using primary/secondary process.
> >
> > Either:
> > intr_handle is not allowed to be used in secondary.
> > Then always use malloc().
> > Or.
> > intr_handle can be used by both primary and secondary.
> > Then always use rte_malloc().
> > Any code path that allocates intr_handle before pool is
> > ready is broken.
>
> Hi Stephen,
>
> Till v2, I implemented this API in a way where the user of the API could
> choose whether the intr handle is allocated using malloc or rte_malloc, by
> passing a flag argument to the rte_intr_instance_alloc API. The user of the
> API knows best whether the intr handle is to be shared with a secondary
> process or not.
>
> But after some discussions and suggestions from the community we decided
> to drop that flag argument and instead auto-detect whether the rte_malloc
> APIs are ready to be used, and thereafter make all further allocations via
> rte_malloc. Currently the alarm subsystem (or any driver doing allocation
> in a constructor) gets its interrupt instance allocated using glibc malloc,
> because rte_malloc* is not yet ready at rte_eal_alarm_init(), while all
> later consumers get instances allocated via rte_malloc.
Just as a comment, bus scanning is the real issue, not the alarms.
Alarms could be initialized after the memory management
(but it's irrelevant because their handle is not accessed from the outside).
However, MM needs to know bus IOVA requirements to initialize,
which is usually determined by at least bus device requirements.
> I think this should not cause any issue in the primary/secondary model as
> all interrupt instance pointers will be shared.
What do you mean? Aren't we discussing the issue
that those allocated early are not shared?
> In fact, to avoid any surprises with primary/secondary
> not working, we thought of making all allocations via rte_malloc.
I don't see why anyone would not make them shared.
In order to only use rte_malloc(), we need:
1. In bus drivers, move handle allocation from scan to probe stage.
2. In EAL, move alarm initialization to after the MM.
It can all be done later with the v3 design, but there are out-of-tree drivers.
We need to force them to make step 1 at some point.
I see two options:
a) Right now have an external API that only works with rte_malloc()
and internal API with autodetection. Fix DPDK and drop internal API.
b) Have external API with autodetection. Fix DPDK.
At the next ABI breakage drop autodetection and libc-malloc.
> David, Thomas, Dmitry, please add if I missed anything.
>
> Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is very near?
I support v3 design with no options and autodetection,
because that's the interface we want in the end.
Implementation can be improved later.
^ permalink raw reply [relevance 3%]
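Stephen's "latent bug" concern in the thread above is that an object allocated with libc malloc() must never be released with rte_free(), and vice versa; with autodetection, the allocator choice is no longer visible to the caller, so the object itself has to carry it. A hypothetical, much-simplified sketch of that idea (the real rte_intr_handle and rte_malloc_is_ready() details differ; here both branches fall back to libc so the sketch stays self-contained):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for rte_malloc_is_ready(): whether the hugepage heap is up.
 * During early boot (bus scan, alarm init) this would still be 0. */
static int rte_heap_ready;

/* Hypothetical simplified interrupt handle. It records which allocator
 * created it, because free() and rte_free() are not interchangeable:
 * whichever side releases the handle must match the allocation side. */
struct intr_handle {
	int from_rte_heap;
	int fd;
};

static struct intr_handle *intr_instance_alloc(void)
{
	struct intr_handle *h;

	/* In real EAL: rte_zmalloc() when the heap is ready, plain
	 * calloc() before memory management is initialized. */
	h = calloc(1, sizeof(*h));
	if (h != NULL)
		h->from_rte_heap = rte_heap_ready;
	return h;
}

static void intr_instance_free(struct intr_handle *h)
{
	if (h == NULL)
		return;
	/* In real EAL: rte_free() vs free(), chosen by the tag. */
	free(h);
}
```

The tag makes freeing safe, but it does not make early handles visible to secondary processes, which is why the thread converges on eventually moving all allocations after memory-management init.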
* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
@ 2021-10-20 16:41 3% ` Akhil Goyal
2021-10-20 16:48 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 16:41 UTC (permalink / raw)
To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
pablo.de.lara.guarch
Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram
> Hi Akhil,
>
> >Subject: [PATCH v3 0/8] crypto/security session framework rework
> >
> >As discussed in last release deprecation notice, crypto and security session
> >framework are reworked to reduce the need of two mempool objects and
> >remove the requirement to expose the rte_security_session and
> >rte_cryptodev_sym_session structures.
> >Design methodology is explained in the patch description.
> >
> >Similar work will need to be done for asymmetric sessions as well.
> Asymmetric
> >session need another rework and is postponed to next release. Since it is
> still
> >in experimental stage, we can modify the APIs in next release as well.
> >
> >The patches are compilable with all affected PMDs and tested with dpdk-
> test
> >and test-crypto-perf app on CN9k platform.
> <snip>
>
> I am seeing test failures for cryptodev_scheduler_autotest:
> + Tests Total : 638
> + Tests Skipped : 280
> + Tests Executed : 638
> + Tests Unsupported: 0
> + Tests Passed : 18
> + Tests Failed : 340
>
> The error shown for each test case:
> scheduler_pmd_sym_session_configure() line 487: unable to config sym
> session
> CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed to
> configure session details
>
> I believe the problem happens in scheduler_pmd_sym_session_configure.
> The full sess object is no longer accessible in here, but it is required to be
> passed to rte_cryptodev_sym_session_init.
> The init function expects access to sess rather than the private data, and now
> fails as a result.
>
> static int
> scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> struct rte_crypto_sym_xform *xform, void *sess,
> rte_iova_t sess_iova __rte_unused)
> {
> struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> uint32_t i;
> int ret;
> for (i = 0; i < sched_ctx->nb_workers; i++) {
> struct scheduler_worker *worker = &sched_ctx->workers[i];
> ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> xform);
> if (ret < 0) {
> CR_SCHED_LOG(ERR, "unable to config sym session");
> return ret;
> }
> }
> return 0;
> }
>
It looks like the scheduler PMD is managing things on its own for other PMDs.
The APIs are designed such that the app can call session_init multiple times
with different dev_ids on the same sess.
But here the scheduler PMD internally wants to configure other PMDs' sess_priv
by calling session_init.
I wonder why we have this 2-step session_create and session_init at all.
Why can't we make it similar to security session create and let the scheduler
PMD have its own big session private data which can hold the priv_data of as
many PMDs as it wants to schedule?
Konstantin/Fan/Pablo, what are your thoughts on this issue?
Can we resolve this issue with priority in RC1 (or probably RC2) for this
release, or else defer it to the next ABI-break release?
Thomas,
Can we defer this to RC2? It does not seem fixable in one day.
^ permalink raw reply [relevance 3%]
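For reference, the two-step create/init model being questioned above can be reduced to a container with one private-data slot per driver: create() allocates the driver-agnostic container, and init() lets each device fill its own driver's slot. This is a hypothetical flattened sketch, not the real rte_cryptodev_sym_session layout:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_DRIVERS 4
#define PRIV_SZ 16

/* Hypothetical flattened symmetric session: one fixed-size
 * private-data slot per registered crypto driver. */
struct sym_session {
	uint8_t initialized[MAX_DRIVERS];
	char priv[MAX_DRIVERS][PRIV_SZ];
};

/* Step 1: allocate the driver-agnostic container. */
static struct sym_session *session_create(void)
{
	return calloc(1, sizeof(struct sym_session));
}

/* Step 2: a given device fills its own driver's slot. The same
 * session can be initialized for several drivers, one call per
 * device, which is the property the scheduler PMD relies on. */
static int session_init(struct sym_session *s, unsigned int driver_id)
{
	if (driver_id >= MAX_DRIVERS)
		return -1;
	memset(s->priv[driver_id], 0xab, PRIV_SZ); /* PMD-specific setup */
	s->initialized[driver_id] = 1;
	return 0;
}
```

The scheduler failure described in the thread fits this model: once a PMD's configure hook receives only its own priv slot instead of the whole container, it can no longer re-run step 2 on behalf of its worker devices.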
* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
2021-10-20 16:41 3% ` Akhil Goyal
@ 2021-10-20 16:48 0% ` Akhil Goyal
2021-10-20 18:04 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 16:48 UTC (permalink / raw)
To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
pablo.de.lara.guarch
Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram
> > Hi Akhil,
> >
> > >Subject: [PATCH v3 0/8] crypto/security session framework rework
> > >
> > >As discussed in last release deprecation notice, crypto and security session
> > >framework are reworked to reduce the need of two mempool objects and
> > >remove the requirement to expose the rte_security_session and
> > >rte_cryptodev_sym_session structures.
> > >Design methodology is explained in the patch description.
> > >
> > >Similar work will need to be done for asymmetric sessions as well.
> > Asymmetric
> > >session need another rework and is postponed to next release. Since it is
> > still
> > >in experimental stage, we can modify the APIs in next release as well.
> > >
> > >The patches are compilable with all affected PMDs and tested with dpdk-
> > test
> > >and test-crypto-perf app on CN9k platform.
> > <snip>
> >
> > I am seeing test failures for cryptodev_scheduler_autotest:
> > + Tests Total : 638
> > + Tests Skipped : 280
> > + Tests Executed : 638
> > + Tests Unsupported: 0
> > + Tests Passed : 18
> > + Tests Failed : 340
> >
> > The error showing for each testcase:
> > scheduler_pmd_sym_session_configure() line 487: unable to config sym
> > session
> > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed to
> > configure session details
> >
> > I believe the problem happens in scheduler_pmd_sym_session_configure.
> > The full sess object is no longer accessible in here, but it is required to be
> > passed to rte_cryptodev_sym_session_init.
> > The init function expects access to sess rather than the private data, and
> now
> > fails as a result.
> >
> > static int
> > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform, void *sess,
> > rte_iova_t sess_iova __rte_unused)
> > {
> > struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > uint32_t i;
> > int ret;
> > for (i = 0; i < sched_ctx->nb_workers; i++) {
> > struct scheduler_worker *worker = &sched_ctx->workers[i];
> > ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > xform);
> > if (ret < 0) {
> > CR_SCHED_LOG(ERR, "unable to config sym session");
> > return ret;
> > }
> > }
> > return 0;
> > }
> >
> It looks like the scheduler PMD is managing the sessions of other PMDs on
> its own.
> The APIs are designed such that the app can call session_init multiple times
> with different dev_id on the same sess.
> But here the scheduler PMD internally wants to configure the other PMDs'
> sess_priv by calling session_init.
>
> I wonder, why do we have this 2-step session_create and session_init?
> Why can't we have it similar to security session create and let the scheduler
> PMD have its big session private data which can hold the priv_data of as
> many PMDs as it wants to schedule.
>
> Konstantin/Fan/Pablo what are your thoughts on this issue?
> Can we resolve this issue with priority in RC1 (or probably RC2) for this
> release, or else defer it to the next ABI-break release?
>
> Thomas,
> Can we defer this for RC2? It does not seem to be fixed in 1 day.
On another thought, this can also be fixed with the current patch by having a
big session private data area for the scheduler PMD, large enough to hold the
data of all the PMDs it wants to schedule, and then calling the sess_configure
function pointer of each dev directly.
What say? This PMD change can be done in RC2, and this patchset can go as is
in RC1.
* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
2021-10-20 16:48 0% ` Akhil Goyal
@ 2021-10-20 18:04 0% ` Akhil Goyal
2021-10-21 8:43 0% ` Zhang, Roy Fan
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 18:04 UTC (permalink / raw)
To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
pablo.de.lara.guarch
Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram
> > > I am seeing test failures for cryptodev_scheduler_autotest:
> > > + Tests Total : 638
> > > + Tests Skipped : 280
> > > + Tests Executed : 638
> > > + Tests Unsupported: 0
> > > + Tests Passed : 18
> > > + Tests Failed : 340
> > >
> > > The error shown for each testcase:
> > > scheduler_pmd_sym_session_configure() line 487: unable to config sym
> > > session
> > > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed
> > > to configure session details
> > >
> > > I believe the problem happens in scheduler_pmd_sym_session_configure.
> > > The full sess object is no longer accessible in here, but it is required to be
> > > passed to rte_cryptodev_sym_session_init.
> > > The init function expects access to sess rather than the private data, and
> > > now fails as a result.
> > >
> > > static int
> > > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > > struct rte_crypto_sym_xform *xform, void *sess,
> > > rte_iova_t sess_iova __rte_unused)
> > > {
> > > struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > > uint32_t i;
> > > int ret;
> > > for (i = 0; i < sched_ctx->nb_workers; i++) {
> > > struct scheduler_worker *worker = &sched_ctx->workers[i];
> > > ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > > xform);
> > > if (ret < 0) {
> > > CR_SCHED_LOG(ERR, "unable to config sym session");
> > > return ret;
> > > }
> > > }
> > > return 0;
> > > }
> > >
> > It looks like the scheduler PMD is managing the sessions of other PMDs
> > on its own.
> > The APIs are designed such that the app can call session_init multiple times
> > with different dev_id on the same sess.
> > But here the scheduler PMD internally wants to configure the other PMDs'
> > sess_priv by calling session_init.
> >
> > I wonder, why do we have this 2-step session_create and session_init?
> > Why can't we have it similar to security session create and let the scheduler
> > PMD have its big session private data which can hold the priv_data of as
> > many PMDs as it wants to schedule.
> >
> > Konstantin/Fan/Pablo what are your thoughts on this issue?
> > Can we resolve this issue with priority in RC1 (or probably RC2) for this
> > release, or else defer it to the next ABI-break release?
> >
> > Thomas,
> > Can we defer this for RC2? It does not seem to be fixed in 1 day.
>
> On another thought, this can also be fixed with the current patch by having
> a big session private data area for the scheduler PMD, large enough to hold
> the data of all the PMDs it wants to schedule, and then calling the
> sess_configure function pointer of each dev directly.
> What say? This PMD change can be done in RC2, and this patchset can go as
> is in RC1.
Here is the diff to the scheduler PMD which should fix this issue with the current patchset.
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index b92ffd6026..0611ea2c6a 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -450,9 +450,8 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
}
static uint32_t
-scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
+get_max_session_priv_size(struct scheduler_ctx *sched_ctx)
{
- struct scheduler_ctx *sched_ctx = dev->data->dev_private;
uint8_t i = 0;
uint32_t max_priv_sess_size = 0;
@@ -469,20 +468,35 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
return max_priv_sess_size;
}
+static uint32_t
+scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev)
+{
+ struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+ return get_max_session_priv_size(sched_ctx) * sched_ctx->nb_workers;
+}
+
static int
scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform, void *sess,
rte_iova_t sess_iova __rte_unused)
{
struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+ uint32_t worker_sess_priv_sz = get_max_session_priv_size(sched_ctx);
uint32_t i;
int ret;
for (i = 0; i < sched_ctx->nb_workers; i++) {
struct scheduler_worker *worker = &sched_ctx->workers[i];
+ struct rte_cryptodev *worker_dev =
+ rte_cryptodev_pmd_get_dev(worker->dev_id);
+ uint8_t index = worker_dev->driver_id;
- ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
- xform);
+ ret = worker_dev->dev_ops->sym_session_configure(
+ worker_dev,
+ xform,
+ (uint8_t *)sess + (index * worker_sess_priv_sz),
+ sess_iova + (index * worker_sess_priv_sz));
if (ret < 0) {
CR_SCHED_LOG(ERR, "unable to config sym session");
return ret;
* [dpdk-dev] [PATCH v5] ethdev: add namespace
@ 2021-10-20 19:23 1% ` Ferruh Yigit
2021-10-22 2:02 1% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-20 19:23 UTC (permalink / raw)
To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
Min Hu (Connor),
Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
Beilei Xing, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Bruce Richardson, Konstantin Ananyev, Ruifeng Wang,
Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Gaetan Rivet,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
Yipeng Wang
Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
v2:
* Updated internal components
* Removed deprecation notice
v3:
* Updated missing macros / structs that David highlighted
* Added release notes update
v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete scripts to update user code, although some were
shared by Aman:
https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
Sending a new version as a possible option to get this patch in for -rc1,
with work on the scripts to follow before the release.
v5:
* rebased on latest next-net
---
app/proc-info/main.c | 8 +-
app/test-eventdev/test_perf_common.c | 4 +-
app/test-eventdev/test_pipeline_common.c | 10 +-
app/test-flow-perf/config.h | 2 +-
app/test-pipeline/init.c | 8 +-
app/test-pmd/cmdline.c | 286 ++---
app/test-pmd/config.c | 200 ++--
app/test-pmd/csumonly.c | 28 +-
app/test-pmd/flowgen.c | 6 +-
app/test-pmd/macfwd.c | 6 +-
app/test-pmd/macswap_common.h | 6 +-
app/test-pmd/parameters.c | 54 +-
app/test-pmd/testpmd.c | 52 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 6 +-
app/test/test_ethdev_link.c | 68 +-
app/test/test_event_eth_rx_adapter.c | 4 +-
app/test/test_kni.c | 2 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 4 +-
| 28 +-
app/test/test_pmd_perf.c | 12 +-
app/test/virtual_pmd.c | 10 +-
doc/guides/eventdevs/cnxk.rst | 2 +-
doc/guides/eventdevs/octeontx2.rst | 2 +-
doc/guides/nics/af_packet.rst | 2 +-
doc/guides/nics/bnxt.rst | 24 +-
doc/guides/nics/enic.rst | 2 +-
doc/guides/nics/features.rst | 114 +-
doc/guides/nics/fm10k.rst | 6 +-
doc/guides/nics/intel_vf.rst | 10 +-
doc/guides/nics/ixgbe.rst | 12 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/tap.rst | 2 +-
.../generic_segmentation_offload_lib.rst | 8 +-
doc/guides/prog_guide/mbuf_lib.rst | 18 +-
doc/guides/prog_guide/poll_mode_drv.rst | 8 +-
doc/guides/prog_guide/rte_flow.rst | 34 +-
doc/guides/prog_guide/rte_security.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 10 +-
doc/guides/rel_notes/release_21_11.rst | 3 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 4 +-
doc/guides/testpmd_app_ug/run_app.rst | 2 +-
drivers/bus/dpaa/include/process.h | 16 +-
drivers/common/cnxk/roc_npc.h | 2 +-
drivers/net/af_packet/rte_eth_af_packet.c | 20 +-
drivers/net/af_xdp/rte_eth_af_xdp.c | 12 +-
drivers/net/ark/ark_ethdev.c | 16 +-
drivers/net/atlantic/atl_ethdev.c | 88 +-
drivers/net/atlantic/atl_ethdev.h | 18 +-
drivers/net/atlantic/atl_rxtx.c | 6 +-
drivers/net/avp/avp_ethdev.c | 26 +-
drivers/net/axgbe/axgbe_dev.c | 6 +-
drivers/net/axgbe/axgbe_ethdev.c | 104 +-
drivers/net/axgbe/axgbe_ethdev.h | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 2 +-
drivers/net/axgbe/axgbe_rxtx.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 12 +-
drivers/net/bnxt/bnxt.h | 62 +-
drivers/net/bnxt/bnxt_ethdev.c | 172 +--
drivers/net/bnxt/bnxt_flow.c | 6 +-
drivers/net/bnxt/bnxt_hwrm.c | 112 +-
drivers/net/bnxt/bnxt_reps.c | 2 +-
drivers/net/bnxt/bnxt_ring.c | 4 +-
drivers/net/bnxt/bnxt_rxq.c | 28 +-
drivers/net/bnxt/bnxt_rxr.c | 4 +-
drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_common.h | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_txr.c | 4 +-
drivers/net/bnxt/bnxt_vnic.c | 30 +-
drivers/net/bnxt/rte_pmd_bnxt.c | 8 +-
drivers/net/bonding/eth_bond_private.h | 4 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 16 +-
drivers/net/bonding/rte_eth_bond_api.c | 6 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 50 +-
drivers/net/cnxk/cn10k_ethdev.c | 42 +-
drivers/net/cnxk/cn10k_rte_flow.c | 2 +-
drivers/net/cnxk/cn10k_rx.c | 4 +-
drivers/net/cnxk/cn10k_tx.c | 4 +-
drivers/net/cnxk/cn9k_ethdev.c | 60 +-
drivers/net/cnxk/cn9k_rx.c | 4 +-
drivers/net/cnxk/cn9k_tx.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 112 +-
drivers/net/cnxk/cnxk_ethdev.h | 49 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 106 +-
drivers/net/cnxk/cnxk_link.c | 14 +-
drivers/net/cnxk/cnxk_ptp.c | 4 +-
drivers/net/cnxk/cnxk_rte_flow.c | 2 +-
drivers/net/cxgbe/cxgbe.h | 46 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 42 +-
drivers/net/cxgbe/cxgbe_main.c | 12 +-
drivers/net/dpaa/dpaa_ethdev.c | 180 +--
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_flow.c | 32 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 138 +--
drivers/net/dpaa2/dpaa2_ethdev.h | 22 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 8 +-
drivers/net/e1000/e1000_ethdev.h | 18 +-
drivers/net/e1000/em_ethdev.c | 64 +-
drivers/net/e1000/em_rxtx.c | 38 +-
drivers/net/e1000/igb_ethdev.c | 158 +--
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/e1000/igb_rxtx.c | 116 +-
drivers/net/ena/ena_ethdev.c | 70 +-
drivers/net/ena/ena_ethdev.h | 4 +-
| 74 +-
drivers/net/enetc/enetc_ethdev.c | 30 +-
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 88 +-
drivers/net/enic/enic_main.c | 40 +-
drivers/net/enic/enic_res.c | 50 +-
drivers/net/failsafe/failsafe.c | 8 +-
drivers/net/failsafe/failsafe_intr.c | 4 +-
drivers/net/failsafe/failsafe_ops.c | 78 +-
drivers/net/fm10k/fm10k.h | 4 +-
drivers/net/fm10k/fm10k_ethdev.c | 146 +--
drivers/net/fm10k/fm10k_rxtx_vec.c | 6 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 22 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 136 +--
drivers/net/hinic/hinic_pmd_rx.c | 36 +-
drivers/net/hinic/hinic_pmd_rx.h | 22 +-
drivers/net/hns3/hns3_dcb.c | 14 +-
drivers/net/hns3/hns3_ethdev.c | 352 +++---
drivers/net/hns3/hns3_ethdev.h | 12 +-
drivers/net/hns3/hns3_ethdev_vf.c | 100 +-
drivers/net/hns3/hns3_flow.c | 6 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
| 108 +-
| 28 +-
drivers/net/hns3/hns3_rxtx.c | 30 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/hns3/hns3_rxtx_vec.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 272 ++---
drivers/net/i40e/i40e_ethdev.h | 24 +-
drivers/net/i40e/i40e_flow.c | 32 +-
drivers/net/i40e/i40e_hash.c | 158 +--
drivers/net/i40e/i40e_pf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 8 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 48 +-
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 178 +--
drivers/net/iavf/iavf_hash.c | 320 +++---
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx.h | 24 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 6 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 86 +-
drivers/net/ice/ice_dcf_vf_representor.c | 56 +-
drivers/net/ice/ice_ethdev.c | 180 +--
drivers/net/ice/ice_ethdev.h | 26 +-
drivers/net/ice/ice_hash.c | 290 ++---
drivers/net/ice/ice_rxtx.c | 16 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 4 +-
drivers/net/ice/ice_rxtx_vec_common.h | 28 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
drivers/net/igc/igc_ethdev.c | 138 +--
drivers/net/igc/igc_ethdev.h | 54 +-
drivers/net/igc/igc_txrx.c | 48 +-
drivers/net/ionic/ionic_ethdev.c | 138 +--
drivers/net/ionic/ionic_ethdev.h | 12 +-
drivers/net/ionic/ionic_lif.c | 36 +-
drivers/net/ionic/ionic_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 64 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 285 +++--
drivers/net/ixgbe/ixgbe_ethdev.h | 18 +-
drivers/net/ixgbe/ixgbe_fdir.c | 24 +-
drivers/net/ixgbe/ixgbe_flow.c | 2 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 12 +-
drivers/net/ixgbe/ixgbe_pf.c | 34 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 249 ++--
drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 2 +-
drivers/net/ixgbe/ixgbe_tm.c | 16 +-
drivers/net/ixgbe/ixgbe_vf_representor.c | 16 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 14 +-
drivers/net/ixgbe/rte_pmd_ixgbe.h | 4 +-
drivers/net/kni/rte_eth_kni.c | 8 +-
drivers/net/liquidio/lio_ethdev.c | 114 +-
drivers/net/memif/memif_socket.c | 2 +-
drivers/net/memif/rte_eth_memif.c | 16 +-
drivers/net/mlx4/mlx4_ethdev.c | 32 +-
drivers/net/mlx4/mlx4_flow.c | 30 +-
drivers/net/mlx4/mlx4_intr.c | 8 +-
drivers/net/mlx4/mlx4_rxq.c | 18 +-
drivers/net/mlx4/mlx4_txq.c | 24 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_defs.h | 6 +-
drivers/net/mlx5/mlx5_ethdev.c | 6 +-
drivers/net/mlx5/mlx5_flow.c | 54 +-
drivers/net/mlx5/mlx5_flow.h | 12 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
| 10 +-
drivers/net/mlx5/mlx5_rxq.c | 40 +-
drivers/net/mlx5/mlx5_rxtx_vec.h | 8 +-
drivers/net/mlx5/mlx5_tx.c | 30 +-
drivers/net/mlx5/mlx5_txq.c | 58 +-
drivers/net/mlx5/mlx5_vlan.c | 4 +-
drivers/net/mlx5/windows/mlx5_os.c | 4 +-
drivers/net/mvneta/mvneta_ethdev.c | 32 +-
drivers/net/mvneta/mvneta_ethdev.h | 10 +-
drivers/net/mvneta/mvneta_rxtx.c | 2 +-
drivers/net/mvpp2/mrvl_ethdev.c | 112 +-
drivers/net/netvsc/hn_ethdev.c | 70 +-
drivers/net/netvsc/hn_rndis.c | 50 +-
drivers/net/nfb/nfb_ethdev.c | 20 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfp/nfp_common.c | 122 +-
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 50 +-
drivers/net/null/rte_eth_null.c | 28 +-
drivers/net/octeontx/octeontx_ethdev.c | 74 +-
drivers/net/octeontx/octeontx_ethdev.h | 30 +-
drivers/net/octeontx/octeontx_ethdev_ops.c | 26 +-
drivers/net/octeontx2/otx2_ethdev.c | 96 +-
drivers/net/octeontx2/otx2_ethdev.h | 64 +-
drivers/net/octeontx2/otx2_ethdev_devargs.c | 12 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 14 +-
drivers/net/octeontx2/otx2_ethdev_sec.c | 8 +-
drivers/net/octeontx2/otx2_flow.c | 2 +-
drivers/net/octeontx2/otx2_flow_ctrl.c | 36 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_link.c | 40 +-
drivers/net/octeontx2/otx2_mcast.c | 2 +-
drivers/net/octeontx2/otx2_ptp.c | 4 +-
| 70 +-
drivers/net/octeontx2/otx2_rx.c | 4 +-
drivers/net/octeontx2/otx2_tx.c | 2 +-
drivers/net/octeontx2/otx2_vlan.c | 42 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 6 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 +-
drivers/net/pcap/pcap_ethdev.c | 12 +-
drivers/net/pfe/pfe_ethdev.c | 18 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_ethdev.c | 156 +--
drivers/net/qede/qede_filter.c | 42 +-
drivers/net/qede/qede_rxtx.c | 2 +-
drivers/net/qede/qede_rxtx.h | 16 +-
drivers/net/ring/rte_eth_ring.c | 20 +-
drivers/net/sfc/sfc.c | 30 +-
drivers/net/sfc/sfc_ef100_rx.c | 10 +-
drivers/net/sfc/sfc_ef100_tx.c | 20 +-
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 8 +-
drivers/net/sfc/sfc_ef10_tx.c | 32 +-
drivers/net/sfc/sfc_ethdev.c | 50 +-
drivers/net/sfc/sfc_flow.c | 2 +-
drivers/net/sfc/sfc_port.c | 52 +-
drivers/net/sfc/sfc_repr.c | 10 +-
drivers/net/sfc/sfc_rx.c | 50 +-
drivers/net/sfc/sfc_tx.c | 50 +-
drivers/net/softnic/rte_eth_softnic.c | 12 +-
drivers/net/szedata2/rte_eth_szedata2.c | 14 +-
drivers/net/tap/rte_eth_tap.c | 104 +-
| 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 102 +-
drivers/net/thunderx/nicvf_ethdev.h | 40 +-
drivers/net/txgbe/txgbe_ethdev.c | 242 ++--
drivers/net/txgbe/txgbe_ethdev.h | 18 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 24 +-
drivers/net/txgbe/txgbe_fdir.c | 20 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_ipsec.c | 12 +-
drivers/net/txgbe/txgbe_pf.c | 34 +-
drivers/net/txgbe/txgbe_rxtx.c | 308 ++---
drivers/net/txgbe/txgbe_rxtx.h | 4 +-
drivers/net/txgbe/txgbe_tm.c | 16 +-
drivers/net/vhost/rte_eth_vhost.c | 16 +-
drivers/net/virtio/virtio_ethdev.c | 124 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 72 +-
drivers/net/vmxnet3/vmxnet3_ethdev.h | 16 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 16 +-
examples/bbdev_app/main.c | 6 +-
examples/bond/main.c | 14 +-
examples/distributor/main.c | 12 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 18 +-
.../pipeline_worker_generic.c | 16 +-
.../eventdev_pipeline/pipeline_worker_tx.c | 12 +-
examples/flow_classify/flow_classify.c | 4 +-
examples/flow_filtering/main.c | 16 +-
examples/ioat/ioatfwd.c | 8 +-
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 20 +-
examples/ip_reassembly/main.c | 18 +-
examples/ipsec-secgw/ipsec-secgw.c | 32 +-
examples/ipsec-secgw/sa.c | 8 +-
examples/ipv4_multicast/main.c | 6 +-
examples/kni/main.c | 8 +-
examples/l2fwd-crypto/main.c | 10 +-
examples/l2fwd-event/l2fwd_common.c | 10 +-
examples/l2fwd-event/main.c | 2 +-
examples/l2fwd-jobstats/main.c | 8 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l2fwd/main.c | 8 +-
examples/l3fwd-acl/main.c | 18 +-
examples/l3fwd-graph/main.c | 14 +-
examples/l3fwd-power/main.c | 16 +-
examples/l3fwd/l3fwd_event.c | 4 +-
examples/l3fwd/main.c | 18 +-
examples/link_status_interrupt/main.c | 10 +-
.../client_server_mp/mp_server/init.c | 4 +-
examples/multi_process/symmetric_mp/main.c | 14 +-
examples/ntb/ntb_fwd.c | 6 +-
examples/packet_ordering/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 16 +-
examples/pipeline/obj.c | 20 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 16 +-
examples/qos_sched/init.c | 6 +-
examples/rxtx_callbacks/main.c | 8 +-
examples/server_node_efd/server/init.c | 8 +-
examples/skeleton/basicfwd.c | 4 +-
examples/vhost/main.c | 26 +-
examples/vm_power_manager/main.c | 6 +-
examples/vmdq/main.c | 20 +-
examples/vmdq_dcb/main.c | 40 +-
lib/ethdev/ethdev_driver.h | 36 +-
lib/ethdev/rte_ethdev.c | 181 ++-
lib/ethdev/rte_ethdev.h | 1021 +++++++++++------
lib/ethdev/rte_flow.h | 2 +-
lib/gso/rte_gso.c | 20 +-
lib/gso/rte_gso.h | 4 +-
lib/mbuf/rte_mbuf_core.h | 8 +-
lib/mbuf/rte_mbuf_dyn.h | 2 +-
339 files changed, 6639 insertions(+), 6382 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
}
ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
- if (ret == 0 && fc_conf.mode != RTE_FC_NONE) {
+ if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE) {
printf("\t -- flow control mode %s%s high %u low %u pause %u%s%s\n",
- fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
- fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
- fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+ fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+ fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
fc_conf.autoneg ? " auto" : "",
fc_conf.high_water,
fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct test_perf *t = evt_test_priv(test);
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_rxconf rx_conf;
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
local_port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = rte_eth_dev_info_get(i, &dev_info);
if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
}
/* Enable mbuf fast free if PMD has the capability. */
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
#define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
/* Configuration */
#define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -178,7 +178,7 @@ app_ports_check_link(void)
RTE_LOG(INFO, USER1, "Port %u %s\n",
port,
link_status_text);
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
all_ports_up = 0;
}
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
int duplex;
if (!strcmp(duplexstr, "half")) {
- duplex = ETH_LINK_HALF_DUPLEX;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
} else if (!strcmp(duplexstr, "full")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else if (!strcmp(duplexstr, "auto")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
fprintf(stderr, "Unknown duplex parameter\n");
return -1;
}
if (!strcmp(speedstr, "10")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
} else if (!strcmp(speedstr, "100")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
} else {
- if (duplex != ETH_LINK_FULL_DUPLEX) {
+ if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
fprintf(stderr, "Invalid speed/duplex parameters\n");
return -1;
}
if (!strcmp(speedstr, "1000")) {
- *speed = ETH_LINK_SPEED_1G;
+ *speed = RTE_ETH_LINK_SPEED_1G;
} else if (!strcmp(speedstr, "10000")) {
- *speed = ETH_LINK_SPEED_10G;
+ *speed = RTE_ETH_LINK_SPEED_10G;
} else if (!strcmp(speedstr, "25000")) {
- *speed = ETH_LINK_SPEED_25G;
+ *speed = RTE_ETH_LINK_SPEED_25G;
} else if (!strcmp(speedstr, "40000")) {
- *speed = ETH_LINK_SPEED_40G;
+ *speed = RTE_ETH_LINK_SPEED_40G;
} else if (!strcmp(speedstr, "50000")) {
- *speed = ETH_LINK_SPEED_50G;
+ *speed = RTE_ETH_LINK_SPEED_50G;
} else if (!strcmp(speedstr, "100000")) {
- *speed = ETH_LINK_SPEED_100G;
+ *speed = RTE_ETH_LINK_SPEED_100G;
} else if (!strcmp(speedstr, "200000")) {
- *speed = ETH_LINK_SPEED_200G;
+ *speed = RTE_ETH_LINK_SPEED_200G;
} else if (!strcmp(speedstr, "auto")) {
- *speed = ETH_LINK_SPEED_AUTONEG;
+ *speed = RTE_ETH_LINK_SPEED_AUTONEG;
} else {
fprintf(stderr, "Unknown speed parameter\n");
return -1;
}
}
- if (*speed != ETH_LINK_SPEED_AUTONEG)
- *speed |= ETH_LINK_SPEED_FIXED;
+ if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+ *speed |= RTE_ETH_LINK_SPEED_FIXED;
return 0;
}
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
int ret;
if (!strcmp(res->value, "all"))
- rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
- ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
- ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
- ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
- ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+ RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+ RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+ RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "eth"))
- rss_conf.rss_hf = ETH_RSS_ETH;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH;
else if (!strcmp(res->value, "vlan"))
- rss_conf.rss_hf = ETH_RSS_VLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
else if (!strcmp(res->value, "ip"))
- rss_conf.rss_hf = ETH_RSS_IP;
+ rss_conf.rss_hf = RTE_ETH_RSS_IP;
else if (!strcmp(res->value, "udp"))
- rss_conf.rss_hf = ETH_RSS_UDP;
+ rss_conf.rss_hf = RTE_ETH_RSS_UDP;
else if (!strcmp(res->value, "tcp"))
- rss_conf.rss_hf = ETH_RSS_TCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_TCP;
else if (!strcmp(res->value, "sctp"))
- rss_conf.rss_hf = ETH_RSS_SCTP;
+ rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
else if (!strcmp(res->value, "ether"))
- rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
else if (!strcmp(res->value, "port"))
- rss_conf.rss_hf = ETH_RSS_PORT;
+ rss_conf.rss_hf = RTE_ETH_RSS_PORT;
else if (!strcmp(res->value, "vxlan"))
- rss_conf.rss_hf = ETH_RSS_VXLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
else if (!strcmp(res->value, "geneve"))
- rss_conf.rss_hf = ETH_RSS_GENEVE;
+ rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
else if (!strcmp(res->value, "nvgre"))
- rss_conf.rss_hf = ETH_RSS_NVGRE;
+ rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
else if (!strcmp(res->value, "l3-pre32"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
else if (!strcmp(res->value, "l3-pre96"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
else if (!strcmp(res->value, "l3-src-only"))
- rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
else if (!strcmp(res->value, "l3-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
else if (!strcmp(res->value, "l4-src-only"))
- rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
else if (!strcmp(res->value, "l4-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
else if (!strcmp(res->value, "l2-src-only"))
- rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
else if (!strcmp(res->value, "l2-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
else if (!strcmp(res->value, "l2tpv3"))
- rss_conf.rss_hf = ETH_RSS_L2TPV3;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
else if (!strcmp(res->value, "esp"))
- rss_conf.rss_hf = ETH_RSS_ESP;
+ rss_conf.rss_hf = RTE_ETH_RSS_ESP;
else if (!strcmp(res->value, "ah"))
- rss_conf.rss_hf = ETH_RSS_AH;
+ rss_conf.rss_hf = RTE_ETH_RSS_AH;
else if (!strcmp(res->value, "pfcp"))
- rss_conf.rss_hf = ETH_RSS_PFCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
else if (!strcmp(res->value, "pppoe"))
- rss_conf.rss_hf = ETH_RSS_PPPOE;
+ rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
else if (!strcmp(res->value, "gtpu"))
- rss_conf.rss_hf = ETH_RSS_GTPU;
+ rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
else if (!strcmp(res->value, "ecpri"))
- rss_conf.rss_hf = ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "mpls"))
- rss_conf.rss_hf = ETH_RSS_MPLS;
+ rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
else if (!strcmp(res->value, "ipv4-chksum"))
- rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+ rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
else if (!strcmp(res->value, "none"))
rss_conf.rss_hf = 0;
else if (!strcmp(res->value, "level-default")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
} else if (!strcmp(res->value, "level-outer")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
} else if (!strcmp(res->value, "level-inner")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
} else if (!strcmp(res->value, "default"))
use_default = 1;
else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
return -1;
}
- idx = hash_index / RTE_RETA_GROUP_SIZE;
- shift = hash_index % RTE_RETA_GROUP_SIZE;
+ idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+ shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[idx].mask |= (1ULL << shift);
reta_conf[idx].reta[shift] = nb_queue;
}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
} else
printf("The reta size of port %d is %u\n",
res->port_id, dev_info.reta_size);
- if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+ if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
fprintf(stderr,
"Currently do not support more than %u entries of redirection table\n",
- ETH_RSS_RETA_SIZE_512);
+ RTE_ETH_RSS_RETA_SIZE_512);
return;
}
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
char *end;
char *str_fld[8];
uint16_t i;
- uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
int ret;
p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
if (ret != 0)
return;
- max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+ max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
if (res->size == 0 || res->size > max_reta_size) {
fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
- if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+ if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
fprintf(stderr,
"The invalid number of traffic class, only 4 or 8 allowed.\n");
return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
enum rte_vlan_type vlan_type;
if (!strcmp(res->vlan_type, "inner"))
- vlan_type = ETH_VLAN_TYPE_INNER;
+ vlan_type = RTE_ETH_VLAN_TYPE_INNER;
else if (!strcmp(res->vlan_type, "outer"))
- vlan_type = ETH_VLAN_TYPE_OUTER;
+ vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
else {
fprintf(stderr, "Unknown vlan type\n");
return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
printf("Parse tunnel is %s\n",
(ports[port_id].parse_tunnel) ? "on" : "off");
printf("IP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
printf("UDP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
printf("TCP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
printf("SCTP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
printf("Outer-Ip checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
printf("Outer-Udp checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
/* display warnings if configuration is not supported by the NIC */
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
- if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware UDP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware TCP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
== 0) {
fprintf(stderr,
"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
if (!strcmp(res->proto, "ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_IPV4_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
} else {
fprintf(stderr,
"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_UDP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
} else {
fprintf(stderr,
"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "tcp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_TCP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
} else {
fprintf(stderr,
"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "sctp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SCTP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
} else {
fprintf(stderr,
"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
} else {
fprintf(stderr,
"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
} else {
fprintf(stderr,
"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr, "Error: TSO is not supported by port %d\n",
res->port_id);
return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
if (ports[res->port_id].tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_TCP_TSO;
+ ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO for non-tunneled packets is disabled\n");
} else {
ports[res->port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO segment size for non-tunneled packets is %d\n",
ports[res->port_id].tso_segsz);
}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr,
"Warning: TSO enabled but not supported by port %d\n",
res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
return dev_info;
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
fprintf(stderr,
"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
fprintf(stderr,
"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
fprintf(stderr,
"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
fprintf(stderr,
"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
fprintf(stderr,
"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
fprintf(stderr,
"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
dev_info = check_tunnel_tso_nic_support(res->port_id);
if (ports[res->port_id].tunnel_tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ ~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
printf("TSO for tunneled packets is disabled\n");
} else {
- uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
ports[res->port_id].dev_conf.txmode.offloads |=
(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
fprintf(stderr,
"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
if (!(ports[res->port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
fprintf(stderr,
"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
return;
}
- if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
rx_fc_en = true;
- if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
tx_fc_en = true;
printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
return;
}
- if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
rx_fc_en = 1;
- if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
tx_fc_en = 1;
}
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
if (!strcmp(res->what,"rxmode")) {
if (!strcmp(res->mode, "AUPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
else if (!strcmp(res->mode, "ROPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
else if (!strcmp(res->mode, "BAM"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
else if (!strncmp(res->mode, "MPE",3))
- vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
}
RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
int ret;
tunnel_udp.udp_port = res->udp_port;
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
tunnel_udp.udp_port = res->udp_port;
if (!strcmp(res->tunnel_type, "vxlan")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
} else if (!strcmp(res->tunnel_type, "geneve")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
} else if (!strcmp(res->tunnel_type, "ecpri")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
} else {
fprintf(stderr, "Invalid tunnel type\n");
return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
#endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MACSEC_INSERT;
+ RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_disable(port_id);
#endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MACSEC_INSERT;
+ ~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 23aa334cda0f..f8ddfe60cd58 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
};
const struct rss_type_info rss_type_table[] = {
- { "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
- ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
- ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
- ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+ RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+ RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
{ "none", 0 },
- { "eth", ETH_RSS_ETH },
- { "l2-src-only", ETH_RSS_L2_SRC_ONLY },
- { "l2-dst-only", ETH_RSS_L2_DST_ONLY },
- { "vlan", ETH_RSS_VLAN },
- { "s-vlan", ETH_RSS_S_VLAN },
- { "c-vlan", ETH_RSS_C_VLAN },
- { "ipv4", ETH_RSS_IPV4 },
- { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
- { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
- { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
- { "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
- { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
- { "ipv6", ETH_RSS_IPV6 },
- { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
- { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
- { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
- { "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
- { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
- { "l2-payload", ETH_RSS_L2_PAYLOAD },
- { "ipv6-ex", ETH_RSS_IPV6_EX },
- { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
- { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
- { "port", ETH_RSS_PORT },
- { "vxlan", ETH_RSS_VXLAN },
- { "geneve", ETH_RSS_GENEVE },
- { "nvgre", ETH_RSS_NVGRE },
- { "ip", ETH_RSS_IP },
- { "udp", ETH_RSS_UDP },
- { "tcp", ETH_RSS_TCP },
- { "sctp", ETH_RSS_SCTP },
- { "tunnel", ETH_RSS_TUNNEL },
+ { "eth", RTE_ETH_RSS_ETH },
+ { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+ { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+ { "vlan", RTE_ETH_RSS_VLAN },
+ { "s-vlan", RTE_ETH_RSS_S_VLAN },
+ { "c-vlan", RTE_ETH_RSS_C_VLAN },
+ { "ipv4", RTE_ETH_RSS_IPV4 },
+ { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+ { "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", RTE_ETH_RSS_IPV6 },
+ { "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+ { "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+ { "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+ { "port", RTE_ETH_RSS_PORT },
+ { "vxlan", RTE_ETH_RSS_VXLAN },
+ { "geneve", RTE_ETH_RSS_GENEVE },
+ { "nvgre", RTE_ETH_RSS_NVGRE },
+ { "ip", RTE_ETH_RSS_IP },
+ { "udp", RTE_ETH_RSS_UDP },
+ { "tcp", RTE_ETH_RSS_TCP },
+ { "sctp", RTE_ETH_RSS_SCTP },
+ { "tunnel", RTE_ETH_RSS_TUNNEL },
{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
- { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
- { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
- { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
- { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
- { "esp", ETH_RSS_ESP },
- { "ah", ETH_RSS_AH },
- { "l2tpv3", ETH_RSS_L2TPV3 },
- { "pfcp", ETH_RSS_PFCP },
- { "pppoe", ETH_RSS_PPPOE },
- { "gtpu", ETH_RSS_GTPU },
- { "ecpri", ETH_RSS_ECPRI },
- { "mpls", ETH_RSS_MPLS },
- { "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
- { "l4-chksum", ETH_RSS_L4_CHKSUM },
+ { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+ { "esp", RTE_ETH_RSS_ESP },
+ { "ah", RTE_ETH_RSS_AH },
+ { "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+ { "pfcp", RTE_ETH_RSS_PFCP },
+ { "pppoe", RTE_ETH_RSS_PPPOE },
+ { "gtpu", RTE_ETH_RSS_GTPU },
+ { "ecpri", RTE_ETH_RSS_ECPRI },
+ { "mpls", RTE_ETH_RSS_MPLS },
+ { "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+ { "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
{ NULL, 0 },
};
@@ -538,39 +538,39 @@ static void
device_infos_display_speeds(uint32_t speed_capa)
{
printf("\n\tDevice speed capability:");
- if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+ if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
printf(" Autonegotiate (all speeds)");
- if (speed_capa & ETH_LINK_SPEED_FIXED)
+ if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
printf(" Disable autonegotiate (fixed speed) ");
- if (speed_capa & ETH_LINK_SPEED_10M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
printf(" 10 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_10M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M)
printf(" 10 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
printf(" 100 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M)
printf(" 100 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_1G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_1G)
printf(" 1 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_2_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
printf(" 2.5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_5G)
printf(" 5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_10G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10G)
printf(" 10 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_20G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_20G)
printf(" 20 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_25G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_25G)
printf(" 25 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_40G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_40G)
printf(" 40 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_50G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_50G)
printf(" 50 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_56G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_56G)
printf(" 56 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_100G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100G)
printf(" 100 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_200G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_200G)
printf(" 200 Gbps ");
}
@@ -700,9 +700,9 @@ port_infos_display(portid_t port_id)
printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
- printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex"));
- printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+ printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
("On") : ("Off"));
if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -720,22 +720,22 @@ port_infos_display(portid_t port_id)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (vlan_offload >= 0){
printf("VLAN offload: \n");
- if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
printf(" strip on, ");
else
printf(" strip off, ");
- if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
printf("filter on, ");
else
printf("filter off, ");
- if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
printf("extend on, ");
else
printf("extend off, ");
- if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
printf("qinq strip on\n");
else
printf("qinq strip off\n");
@@ -2919,8 +2919,8 @@ port_rss_reta_info(portid_t port_id,
}
for (i = 0; i < nb_entries; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3288,7 +3288,7 @@ dcb_fwd_config_setup(void)
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
fwd_lcores[lc_id]->stream_idx = sm_id;
- for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+ for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
@@ -4351,11 +4351,11 @@ vlan_extend_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
} else {
- vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4381,11 +4381,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
- vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4426,11 +4426,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
- vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4456,11 +4456,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
} else {
- vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4530,7 +4530,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (ports[port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_QINQ_INSERT) {
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
fprintf(stderr, "Error, as QinQ has been enabled.\n");
return;
}
@@ -4539,7 +4539,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
fprintf(stderr,
"Error: vlan insert is not supported by port %d\n",
port_id);
@@ -4547,7 +4547,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
ports[port_id].tx_vlan_id = vlan_id;
}
@@ -4566,7 +4566,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
fprintf(stderr,
"Error: qinq insert not supported by port %d\n",
port_id);
@@ -4574,8 +4574,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = vlan_id;
ports[port_id].tx_vlan_id_outer = vlan_id_outer;
}
@@ -4584,8 +4584,8 @@ void
tx_vlan_reset(portid_t port_id)
{
ports[port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = 0;
ports[port_id].tx_vlan_id_outer = 0;
}
@@ -4991,7 +4991,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
ret = eth_link_get_nowait_print_err(port_id, &link);
if (ret < 0)
return 1;
- if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+ if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
rate > link.link_speed) {
fprintf(stderr,
"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= PKT_TX_UDP_CKSUM;
} else {
udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
if (tso_segsz)
ol_flags |= PKT_TX_TCP_SEG;
- else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
} else {
tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
((char *)l3_hdr + info->l3_len);
/* sctp payload must be a multiple of 4 to be
* offloaded */
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
((ipv4_hdr->total_length & 0x3) == 0)) {
ol_flags |= PKT_TX_SCTP_CKSUM;
} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ipv4_hdr->hdr_checksum = 0;
ol_flags |= PKT_TX_OUTER_IPV4;
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
ol_flags |= PKT_TX_OUTER_IP_CKSUM;
else
ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ol_flags |= PKT_TX_TCP_SEG;
/* Skip SW outer UDP checksum generation if HW supports it */
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
udp_hdr->dgram_cksum
= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
if (info.tunnel_tso_segsz ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
m->outer_l2_len = info.outer_l2_len;
m->outer_l3_len = info.outer_l3_len;
m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
rte_be_to_cpu_16(info.outer_ethertype),
info.outer_l3_len);
/* dump tx packet info */
- if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+ if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
info.tso_segsz != 0)
printf("tx: m->l2_len=%d m->l3_len=%d "
"m->l4_len=%d\n",
m->l2_len, m->l3_len, m->l4_len);
if (info.is_tunnel == 1) {
if ((tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
(tx_ol_flags & PKT_TX_OUTER_IPV6))
printf("tx: m->outer_l2_len=%d "
"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags |= PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
txp = &ports[fs->tx_port];
tx_offloads = txp->dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (i = 0; i < nb_rx; i++) {
if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
{
uint64_t ol_flags = 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
PKT_TX_VLAN : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
PKT_TX_QINQ : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
PKT_TX_MACSEC : 0;
return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index ab8e8f7e694a..693e77eff2c0 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -546,29 +546,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
static int
parse_link_speed(int n)
{
- uint32_t speed = ETH_LINK_SPEED_FIXED;
+ uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
switch (n) {
case 1000:
- speed |= ETH_LINK_SPEED_1G;
+ speed |= RTE_ETH_LINK_SPEED_1G;
break;
case 10000:
- speed |= ETH_LINK_SPEED_10G;
+ speed |= RTE_ETH_LINK_SPEED_10G;
break;
case 25000:
- speed |= ETH_LINK_SPEED_25G;
+ speed |= RTE_ETH_LINK_SPEED_25G;
break;
case 40000:
- speed |= ETH_LINK_SPEED_40G;
+ speed |= RTE_ETH_LINK_SPEED_40G;
break;
case 50000:
- speed |= ETH_LINK_SPEED_50G;
+ speed |= RTE_ETH_LINK_SPEED_50G;
break;
case 100000:
- speed |= ETH_LINK_SPEED_100G;
+ speed |= RTE_ETH_LINK_SPEED_100G;
break;
case 200000:
- speed |= ETH_LINK_SPEED_200G;
+ speed |= RTE_ETH_LINK_SPEED_200G;
break;
case 100:
case 10:
@@ -1000,13 +1000,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
if (!strcmp(optarg, "64K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_64K;
+ RTE_ETH_FDIR_PBALLOC_64K;
else if (!strcmp(optarg, "128K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_128K;
+ RTE_ETH_FDIR_PBALLOC_128K;
else if (!strcmp(optarg, "256K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_256K;
+ RTE_ETH_FDIR_PBALLOC_256K;
else
rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
" must be: 64K or 128K or 256K\n",
@@ -1048,34 +1048,34 @@ launch_args_parse(int argc, char** argv)
}
#endif
if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
- rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
- rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
- rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
if (!strcmp(lgopts[opt_idx].name,
"enable-rx-timestamp"))
- rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-filter"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-extend"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-qinq-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
rx_drop_en = 1;
@@ -1097,13 +1097,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
set_pkt_forwarding_mode(optarg);
if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
- rss_hf = ETH_RSS_IP;
+ rss_hf = RTE_ETH_RSS_IP;
if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
- rss_hf = ETH_RSS_UDP;
+ rss_hf = RTE_ETH_RSS_UDP;
if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
- rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
- rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
if (!strcmp(lgopts[opt_idx].name, "rxq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1482,12 +1482,12 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
char *end = NULL;
n = strtoul(optarg, &end, 16);
- if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+ if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
else
rte_exit(EXIT_FAILURE,
"rx-mq-mode must be >= 0 and <= %d\n",
- ETH_MQ_RX_VMDQ_DCB_RSS);
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
}
if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index af0e79fe6d51..bf2420db0da6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -348,7 +348,7 @@ uint64_t noisy_lkup_num_reads_writes;
/*
* Receive Side Scaling (RSS) configuration.
*/
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
/*
* Port topology configuration
@@ -459,12 +459,12 @@ lcoreid_t latencystats_lcore_id = -1;
struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
};
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
.mode = RTE_FDIR_MODE_NONE,
- .pballoc = RTE_FDIR_PBALLOC_64K,
+ .pballoc = RTE_ETH_FDIR_PBALLOC_64K,
.status = RTE_FDIR_REPORT_STATUS,
.mask = {
.vlan_tci_mask = 0xFFEF,
@@ -518,7 +518,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
/*
* hexadecimal bitmask of RX mq mode can be enabled.
*/
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
/*
* Used to set forced link speed
@@ -1572,9 +1572,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Apply Rx offloads configuration */
for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1711,8 +1711,8 @@ init_config(void)
init_port_config();
- gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
/*
* Records which Mbuf pool to use by each logical core, if needed.
*/
@@ -3457,7 +3457,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3751,17 +3751,17 @@ init_port_config(void)
if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
port->dev_conf.rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_RSS);
} else {
- port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
for (i = 0;
i < port->dev_info.nb_rx_queues;
i++)
port->rx_conf[i].offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
}
}
@@ -3849,9 +3849,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->enable_default_pool = 0;
vmdq_rx_conf->default_pool = 0;
vmdq_rx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_tx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3859,7 +3859,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->pool_map[i].pools =
1 << (i % vmdq_rx_conf->nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
}
@@ -3867,8 +3867,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
/* set DCB mode of RX and TX of multiple queues */
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
- eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ (rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
} else {
struct rte_eth_dcb_rx_conf *rx_conf =
&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3884,23 +3884,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
rx_conf->nb_tcs = num_tcs;
tx_conf->nb_tcs = num_tcs;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
rx_conf->dcb_tc[i] = i % num_tcs;
tx_conf->dcb_tc[i] = i % num_tcs;
}
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
eth_conf->rx_adv_conf.rss_conf = rss_conf;
- eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
}
if (pfc_en)
eth_conf->dcb_capability_en =
- ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+ RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
else
- eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+ eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
return 0;
}
@@ -3929,7 +3929,7 @@ init_port_dcb_config(portid_t pid,
retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
if (retval < 0)
return retval;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
/* re-configure the device . */
retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3979,7 +3979,7 @@ init_port_dcb_config(portid_t pid,
rxtx_port_config(rte_port);
/* VLAN filter */
- rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
for (i = 0; i < RTE_DIM(vlan_tags); i++)
rx_vft_set(pid, vlan_tags[i], 1);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e3995d24ab53..ccd025d5e0f5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -491,7 +491,7 @@ extern lcoreid_t bitrate_lcore_id;
extern uint8_t bitrate_enabled;
#endif
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
extern uint32_t max_rx_pkt_len;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
tx_offloads = txp->dev_conf.txmode.offloads;
vlan_tci = txp->tx_vlan_id;
vlan_tci_outer = txp->tx_vlan_id_outer;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
text, strlen(text), "Invalid default link status string");
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_FIXED;
- link_status.link_speed = ETH_SPEED_NUM_10M,
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #2: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_NONE;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
"string with HDX");
/* test max str len */
- link_status.link_speed = ETH_SPEED_NUM_200G;
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_AUTONEG;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #4:len = %d, %s\n", ret, text);
RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
int ret = 0;
struct rte_eth_link link_status = {
.link_speed = 55555,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
const char *value;
uint32_t link_speed;
} speed_str_map[] = {
- { "None", ETH_SPEED_NUM_NONE },
- { "10 Mbps", ETH_SPEED_NUM_10M },
- { "100 Mbps", ETH_SPEED_NUM_100M },
- { "1 Gbps", ETH_SPEED_NUM_1G },
- { "2.5 Gbps", ETH_SPEED_NUM_2_5G },
- { "5 Gbps", ETH_SPEED_NUM_5G },
- { "10 Gbps", ETH_SPEED_NUM_10G },
- { "20 Gbps", ETH_SPEED_NUM_20G },
- { "25 Gbps", ETH_SPEED_NUM_25G },
- { "40 Gbps", ETH_SPEED_NUM_40G },
- { "50 Gbps", ETH_SPEED_NUM_50G },
- { "56 Gbps", ETH_SPEED_NUM_56G },
- { "100 Gbps", ETH_SPEED_NUM_100G },
- { "200 Gbps", ETH_SPEED_NUM_200G },
- { "Unknown", ETH_SPEED_NUM_UNKNOWN },
+ { "None", RTE_ETH_SPEED_NUM_NONE },
+ { "10 Mbps", RTE_ETH_SPEED_NUM_10M },
+ { "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+ { "1 Gbps", RTE_ETH_SPEED_NUM_1G },
+ { "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+ { "5 Gbps", RTE_ETH_SPEED_NUM_5G },
+ { "10 Gbps", RTE_ETH_SPEED_NUM_10G },
+ { "20 Gbps", RTE_ETH_SPEED_NUM_20G },
+ { "25 Gbps", RTE_ETH_SPEED_NUM_25G },
+ { "40 Gbps", RTE_ETH_SPEED_NUM_40G },
+ { "50 Gbps", RTE_ETH_SPEED_NUM_50G },
+ { "56 Gbps", RTE_ETH_SPEED_NUM_56G },
+ { "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+ { "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+ { "Unknown", RTE_ETH_SPEED_NUM_UNKNOWN },
{ "Invalid", 50505 }
};
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
.intr_conf = {
.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
};
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
static const struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
struct rte_eth_rss_conf rss_conf;
uint8_t rss_key[40];
- struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
uint8_t is_slave;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
- struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
struct slave_conf slave_ports[SLAVE_COUNT];
struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params = {
*/
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IPV6,
+ .rss_hf = RTE_ETH_RSS_IPV6,
},
},
.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
static int
reta_set(uint16_t port_id, uint8_t value, int reta_size)
{
- struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
int i, j;
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
/* select all fields to set */
reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
reta_conf[i].reta[j] = value;
}
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
for (i = 0; i < test_params.bond_dev_info.reta_size;
i++) {
- int index = i / RTE_RETA_GROUP_SIZE;
- int shift = i % RTE_RETA_GROUP_SIZE;
+ int index = i / RTE_ETH_RETA_GROUP_SIZE;
+ int shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (port->reta_conf[index].reta[shift] !=
test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
bond_reta_fetch(void) {
unsigned j;
- for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+ for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
j++)
test_params.bond_reta_conf[j].mask = ~0LL;
@@ -268,7 +268,7 @@ static int
slave_reta_fetch(struct slave_conf *port) {
unsigned j;
- for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
port->reta_conf[j].mask = ~0LL;
TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 1, /* enable loopback */
};
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
/* bulk alloc rx, full-featured tx */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "hybrid")) {
/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
*/
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "full")) {
/* full feature rx,tx pair */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
void *pkt = NULL;
struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
int wait_to_complete __rte_unused)
{
if (!bonded_eth_dev->data->dev_started)
- bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
eth_dev->data->nb_rx_queues = (uint16_t)1;
eth_dev->data->nb_tx_queues = (uint16_t)1;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
- eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
------------------------
The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
application.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
* TX: only the following reduced set of transmit offloads is supported in
vector mode::
- DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* RX: only the following reduced set of receive offloads is supported in
vector mode (note that jumbo MTU is allowed only when the MTU setting
- does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
- DEV_RX_OFFLOAD_VLAN_STRIP
- DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_IPV4_CKSUM
- DEV_RX_OFFLOAD_UDP_CKSUM
- DEV_RX_OFFLOAD_TCP_CKSUM
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
- DEV_RX_OFFLOAD_RSS_HASH
- DEV_RX_OFFLOAD_VLAN_FILTER
+ does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_RSS_HASH
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER
The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
.. code-block:: console
vlan_offload = rte_eth_dev_get_vlan_offload(port);
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
rte_eth_dev_set_vlan_offload(port, vlan_offload);
Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 8dd421ca013b..b48d9dcb9591 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
Supports getting the speed capabilities that the current device is capable of.
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
* **[related] API**: ``rte_eth_dev_info_get()``.
@@ -101,11 +101,11 @@ Supports Rx interrupts.
Lock-free Tx queue
------------------
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -117,8 +117,8 @@ Fast mbuf free
Supports optimization for fast release of mbufs following successful Tx.
Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
.. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
Supports receiving segmented mbufs.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
Supports Large Receive Offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
@@ -221,12 +221,12 @@ TSO
Supports TCP Segmentation Offloading.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
Supports RSS hashing on RX.
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -302,7 +302,7 @@ Inner RSS
Supports RX RSS hashing on Inner headers.
* **[uses] rte_flow_action_rss**: ``level``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -339,7 +339,7 @@ VMDq
Supports Virtual Machine Device Queues (VMDq).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
Supports Data Center Bridging (DCB).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
Supports filtering of a VLAN Tag identifier.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
operations of security protocol while packet is received in NIC. NIC is not aware
of protocol operations. See Security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
packet is received at NIC. The NIC is capable of understanding the security
protocol operations. See security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
Supports CRC stripping by hardware.
A PMD assumed to support CRC stripping by default. PMD should advertise if it supports keeping CRC.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
.. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
Supports VLAN offload to hardware.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -487,14 +487,14 @@ QinQ offload
Supports QinQ (queue in queue) offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
* **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides] rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides] rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
* **[related] API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
@@ -519,16 +519,16 @@ L3 checksum offload
Supports L3 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[uses] mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
Supports L4 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_hw_timestamp:
@@ -557,10 +557,10 @@ Timestamp offload
Supports Timestamp.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[related] eth_dev_ops**: ``read_clock``.
.. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
Supports MACsec.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
Supports inner packet L3 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
Supports inner packet L4 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
.. _nic_features_packet_type_parsing:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
will be checked:
-* ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+* ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
-* ``DEV_RX_OFFLOAD_CHECKSUM``
+* ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
-* ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
* ``fdir_conf->mode``
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
* If the max number of VFs (max_vfs) is set in the range of 1 to 32:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
* If the max number of VFs (max_vfs) is in the range of 33 to 64:
If the number of Rx queues in specified as 4 (``--rxq=4`` in testpmd), then error message is expected
as ``rxq`` is not correct at this case;
- If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+ If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
and each VF have 2 Rx queues;
- On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
- or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+ On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+ or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
It also needs config VF RSS information like hash function, RSS key, RSS key length.
.. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
-* DEV_RX_OFFLOAD_VLAN_STRIP
+* RTE_ETH_RX_OFFLOAD_VLAN_STRIP
-* DEV_RX_OFFLOAD_VLAN_EXTEND
+* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
-* DEV_RX_OFFLOAD_CHECKSUM
+* RTE_ETH_RX_OFFLOAD_CHECKSUM
-* DEV_RX_OFFLOAD_HEADER_SPLIT
+* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
* dev_conf
@@ -163,13 +163,13 @@ l3fwd
~~~~~
When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.
load_balancer
~~~~~~~~~~~~~
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index e4f58c899031..cc1726207f6c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
- CRC:
- - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+ - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
@@ -607,7 +607,7 @@ Driver options
small-packet traffic.
When MPRQ is enabled, MTU can be larger than the size of
- user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+ user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
be added in next releases
TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
**Known limitation:** TAP supports all of the above hash functions together
and not in partial combinations.
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
- the bit mask of required GSO types. The GSO library uses the same macros as
those that describe a physical device's TX offloading capabilities (i.e.
- ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+ ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
wants to segment TCP/IPv4 packets, it should set gso_types to
- ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
- supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
- ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+ ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+ supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+ ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
allowed.
- a flag, that indicates whether the IPv4 headers of output segments should
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
set out_ip checksum to 0 in the packet
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
- calculate checksum of out_ip and out_udp::
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
- and DEV_TX_OFFLOAD_UDP_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+ and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
- calculate checksum of in_ip::
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
This is similar to case 1), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
This is similar to case 2), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
- DEV_TX_OFFLOAD_TCP_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
- DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API
documentation (rte_mbuf.h). Also refer to the testpmd source code
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
enables more scaling as all workers can send the packets.
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
Device Identification, Ownership and Configuration
--------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any requested offloading by an application must be within the device capabilities.
Any offloading is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aeba3741825e..063ff388476a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1968,23 +1968,23 @@ only matching traffic goes through.
.. table:: RSS
- +---------------+---------------------------------------------+
- | Field | Value |
- +===============+=============================================+
- | ``func`` | RSS hash function to apply |
- +---------------+---------------------------------------------+
- | ``level`` | encapsulation level for ``types`` |
- +---------------+---------------------------------------------+
- | ``types`` | specific RSS hash types (see ``ETH_RSS_*``) |
- +---------------+---------------------------------------------+
- | ``key_len`` | hash key length in bytes |
- +---------------+---------------------------------------------+
- | ``queue_num`` | number of entries in ``queue`` |
- +---------------+---------------------------------------------+
- | ``key`` | hash key |
- +---------------+---------------------------------------------+
- | ``queue`` | queue indices to use |
- +---------------+---------------------------------------------+
+ +---------------+-------------------------------------------------+
+ | Field | Value |
+ +===============+=================================================+
+ | ``func`` | RSS hash function to apply |
+ +---------------+-------------------------------------------------+
+ | ``level`` | encapsulation level for ``types`` |
+ +---------------+-------------------------------------------------+
+ | ``types`` | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+ +---------------+-------------------------------------------------+
+ | ``key_len`` | hash key length in bytes |
+ +---------------+-------------------------------------------------+
+ | ``queue_num`` | number of entries in ``queue`` |
+ +---------------+-------------------------------------------------+
+ | ``key`` | hash key |
+ +---------------+-------------------------------------------------+
+ | ``queue`` | queue indices to use |
+ +---------------+-------------------------------------------------+
Action: ``PF``
^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
For Inline Crypto and Inline protocol offload, device specific defined metadata is
updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
For inline protocol offloaded ingress traffic, the application can register a
pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0b4d03fb961f..199c3fa0bd70 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
usage in following public struct hierarchy:
- ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+ ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
Need to identify this kind of usages and fix in 20.11, otherwise this blocks
us extending existing enum/define.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
- Macros will be added for backward compatibility.
- Backward compatibility macros will be removed on v22.11.
- A few old backward compatibility macros from 2013 that does not have
- proper prefix will be removed on v21.11.
-
* ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
will be removed in DPDK 20.11.
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
This will allow application to enable or disable PMDs from updating
``rte_mbuf::hash::fdir``.
This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 041383ee2a73..707352099b13 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -368,6 +368,9 @@ ABI Changes
to be transparent for both users (no changes in user app is required) and
PMD developers (no changes in PMD is required).
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+ updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
Known Issues
------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
* ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
- (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the RX HW offload capabilities.
By default all HW RX offloads are enabled.
* ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
- (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the TX HW offload capabilities.
By default all HW TX offloads are enabled.
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 8ff7ab85369c..2e1446ee461b 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -537,7 +537,7 @@ The command line options are:
Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
The default value is 0x7::
- ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+ RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
* ``--record-core-cycles``
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
struct usdpaa_ioctl_link_status_args_old {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
- /* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+ /* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
int link_autoneg;
};
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
struct usdpaa_ioctl_update_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_update_link_speed {
/* network device node name*/
char if_name[IF_NAME_MAX_LEN];
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
};
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index 10d1ac82a4bd..21883f6b3f66 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -160,7 +160,7 @@ enum roc_npc_rss_hash_function {
struct roc_npc_action_rss {
enum roc_npc_rss_hash_function func;
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
internals->tx_queue[i].sockfd = -1;
}
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
struct pmd_internals *internals = dev->data->dev_private;
- internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
/* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
static int
eth_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
/* ARK PMD supports all line rates, how do we indicate that here ?? */
- dev_info->speed_capa = (ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G);
-
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G);
+
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return 0;
}
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
.remove = eth_atl_pci_remove,
};
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
- | DEV_RX_OFFLOAD_IPV4_CKSUM \
- | DEV_RX_OFFLOAD_UDP_CKSUM \
- | DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_MACSEC_STRIP \
- | DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
- | DEV_TX_OFFLOAD_IPV4_CKSUM \
- | DEV_TX_OFFLOAD_UDP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_TSO \
- | DEV_TX_OFFLOAD_MACSEC_INSERT \
- | DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+ | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+ | RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+ | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_TSO \
+ | RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+ | RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define SFP_EEPROM_SIZE 0x100
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
/* set adapter started */
hw->adapter_stopped = 0;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
uint32_t link_speeds = dev->data->dev_conf.link_speeds;
uint32_t speed_mask = 0;
- if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed_mask = hw->aq_nic_cfg->link_speed_msk;
} else {
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
speed_mask |= AQ_NIC_RATE_10G;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
speed_mask |= AQ_NIC_RATE_5G;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
speed_mask |= AQ_NIC_RATE_1G;
- if (link_speeds & ETH_LINK_SPEED_2_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed_mask |= AQ_NIC_RATE_2G5;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
speed_mask |= AQ_NIC_RATE_100M;
}
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
return 0;
}
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
u32 fc = AQ_NIC_FC_OFF;
int err = 0;
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
memset(&old, 0, sizeof(old));
/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
return 0;
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = hw->aq_link_status.mbps;
rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
hw->aq_fw_ops->get_flow_control(hw, &fc);
if (fc == AQ_NIC_FC_OFF)
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (fc & AQ_NIC_FC_RX)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (fc & AQ_NIC_FC_TX)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
return 0;
}
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
if (hw->aq_fw_ops->set_flow_control == NULL)
return -ENOTSUP;
- if (fc_conf->mode == RTE_FC_NONE)
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
- else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
- else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
- else if (fc_conf->mode == RTE_FC_FULL)
+ else if (fc_conf->mode == RTE_ETH_FC_FULL)
hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+ ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
- cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+ cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
for (i = 0; i < dev->data->nb_rx_queues; i++)
hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
- if (mask & ETH_VLAN_EXTEND_MASK)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK)
ret = -ENOTSUP;
return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
PMD_INIT_FUNC_TRACE();
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
break;
default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
#include "hw_atl/hw_atl_utils.h"
#define ATL_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define ATL_DEV_PRIVATE_TO_HW(adapter) \
(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
/* Setup required number of queues */
_avp_set_queue_counts(eth_dev);
- mask = (ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ mask = (RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
ret = avp_vlan_offload_set(eth_dev, mask);
if (ret < 0) {
PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_eth_link *link = &eth_dev->data->dev_link;
- link->link_speed = ETH_SPEED_NUM_10G;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_status = !!(avp->flags & AVP_F_LINKUP);
return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
}
return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
uint64_t offloads = dev_conf->rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
else
avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
}
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
pdata->rss_hf = rss_conf->rss_hf;
rss_hf = rss_conf->rss_hf;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
/* Checksum offload to hardware */
pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
}
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
{
struct axgbe_port *pdata = dev->data->dev_private;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
pdata->rss_enable = 1;
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
pdata->rss_enable = 0;
else
return -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
- if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
link.link_status = pdata->phy_link;
link.link_speed = pdata->phy_speed;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (pdata->hw_feat.rss) {
dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc.autoneg = pdata->pause_autoneg;
if (pdata->rx_pause && pdata->tx_pause)
- fc.mode = RTE_FC_FULL;
+ fc.mode = RTE_ETH_FC_FULL;
else if (pdata->rx_pause)
- fc.mode = RTE_FC_RX_PAUSE;
+ fc.mode = RTE_ETH_FC_RX_PAUSE;
else if (pdata->tx_pause)
- fc.mode = RTE_FC_TX_PAUSE;
+ fc.mode = RTE_ETH_FC_TX_PAUSE;
else
- fc.mode = RTE_FC_NONE;
+ fc.mode = RTE_ETH_FC_NONE;
fc_conf->high_water = (1024 + (fc.low_water[0] << 9)) / 1024;
fc_conf->low_water = (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
AXGMAC_IOWRITE(pdata, reg, reg_val);
fc.mode = fc_conf->mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
fc.mode = pfc_conf->fc.mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+ case RTE_ETH_VLAN_TYPE_INNER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
if (qinq) {
if (tpid != 0x8100 && tpid != 0x88a8)
PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"Inner type not supported in single tag\n");
}
break;
- case ETH_VLAN_TYPE_OUTER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+ case RTE_ETH_VLAN_TYPE_OUTER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
if (qinq) {
PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"tag supported 0x8100/0x88A8\n");
}
break;
- case ETH_VLAN_TYPE_MAX:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+ case RTE_ETH_VLAN_TYPE_MAX:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
break;
- case ETH_VLAN_TYPE_UNKNOWN:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+ case RTE_ETH_VLAN_TYPE_UNKNOWN:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
break;
}
return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_stripping(pdata);
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_filtering(pdata);
}
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
axgbe_vlan_extend_enable(pdata);
/* Set global registers with default ethertype*/
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
} else {
PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
/* Receive Side Scaling */
#define AXGBE_RSS_OFFLOAD ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define AXGBE_RSS_HASH_KEY_SIZE 40
#define AXGBE_RSS_MAX_TABLE_SIZE 256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
pdata->an_int = 0;
axgbe_an73_clear_interrupts(pdata);
pdata->eth_dev->data->dev_link.link_status =
- ETH_LINK_DOWN;
+ RTE_ETH_LINK_DOWN;
} else if (pdata->an_state == AXGBE_AN_ERROR) {
PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
(DMA_CH_INC * rxq->queue_id));
rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
DMA_CH_RDTR_LO);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
link.link_speed = sc->link_vars.line_speed;
switch (sc->link_vars.duplex) {
case DUPLEX_FULL:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case DUPLEX_HALF:
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
link.link_status = sc->link_vars.link_up;
return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
"VF device is no longer operational");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
}
return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
- dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
bnx2x_load_firmware(sc);
assert(sc->firmware);
- if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
sc->udp_rss = 1;
sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
#define BNXT_FW_STATUS_SHUTDOWN 0x100000
#define BNXT_ETH_RSS_SUPPORT ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
goto err_out;
/* Alloc RSS context only if RSS mode is enabled */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int j, nr_ctxs = bnxt_rss_ctxts(bp);
/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
* setting is not available at this time, it will not be
* configured correctly in the CFA.
*/
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
- (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
true : false);
if (rc)
goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
link_speed = bp->link_info->support_pam4_speeds;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
- speed_capa |= ETH_LINK_SPEED_2_5G;
+ speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
- speed_capa |= ETH_LINK_SPEED_20G;
+ speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
if (bp->link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
dev_info->tx_queue_offload_capa;
if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
*/
/* VMDq resources */
- vpool = 64; /* ETH_64_POOLS */
- vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+ vpool = 64; /* RTE_ETH_64_POOLS */
+ vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
for (i = 0; i < 4; vpool >>= 1, i++) {
if (max_vnics > vpool) {
for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
(uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
goto resource_error;
- if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+ if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
bp->max_vnics < eth_dev->data->nb_rx_queues)
goto resource_error;
bp->rx_cp_nr_rings = bp->rx_nr_rings;
bp->tx_cp_nr_rings = bp->tx_nr_rings;
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
eth_dev->data->port_id,
(uint32_t)link->link_speed,
- (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
uint16_t buf_size;
int i;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return 1;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
return 1;
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
* a limited subset have been enabled.
*/
if (eth_dev->data->dev_conf.rxmode.offloads &
- ~(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_VLAN_FILTER))
+ ~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
goto use_scalar_rx;
#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
* or tx offloads.
*/
if (eth_dev->data->scattered_rx ||
- (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
BNXT_TRUFLOW_EN(bp))
goto use_scalar_tx;
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
bnxt_link_update_op(eth_dev, 1);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- vlan_mask |= ETH_VLAN_FILTER_MASK;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- vlan_mask |= ETH_VLAN_STRIP_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
if (rc)
goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
/* Retrieve link info from hardware */
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
- new.link_speed = ETH_LINK_SPEED_100M;
- new.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new.link_speed = RTE_ETH_LINK_SPEED_100M;
+ new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR,
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
if (!vnic->rss_table)
return -EINVAL;
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return -EINVAL;
if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
for (i = 0; i < reta_size; i++) {
struct bnxt_rx_queue *rxq;
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << sft)))
continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
}
for (idx = 0, i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << sft)) {
uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
* If RSS enablement were different than dev_configure,
* then return -EINVAL
*/
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (!rss_conf->rss_hf)
PMD_DRV_LOG(ERR, "Hash type NONE\n");
} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
vnic->hash_mode =
bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
- ETH_RSS_LEVEL(rss_conf->rss_hf));
+ RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
/*
* If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
hash_types = vnic->hash_type;
rss_conf->rss_hf = 0;
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
fc_conf->autoneg = 1;
switch (bp->link_info->pause) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
}
return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
bp->link_info->auto_pause = 0;
bp->link_info->force_pause = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
}
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
}
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
tunnel_type =
HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (!bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
port = bp->vxlan_fw_dst_port_id;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (!bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
int rc;
vnic = BNXT_GET_DEFAULT_VNIC(bp);
- if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
/* Remove any VLAN filters programmed */
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
bnxt_add_vlan_filter(bp, 0);
}
PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
return 0;
}
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
/* Destroy vnic filters and vnic */
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = bnxt_add_vlan_filter(bp, 0);
if (rc)
return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
return rc;
}
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
if (!dev->data->dev_started)
return 0;
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering */
rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
else
PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
{
struct bnxt *bp = dev->data->dev_private;
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
- if (vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
return -EINVAL;
}
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
switch (tpid) {
case RTE_ETHER_TYPE_QINQ:
bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
}
bp->outer_tpid_bd |= tpid;
PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer vlan in QinQ\n");
return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
bnxt_del_dflt_mac_filter(bp, vnic);
memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
/* This filter will allow only untagged packets */
rc = bnxt_add_vlan_filter(bp, 0);
} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
}
/* If RSS types is 0, use a best effort configuration */
- types = rss->types ? rss->types : ETH_RSS_IPV4;
+ types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
hash_type = bnxt_rte_to_hwrm_hash_types(types);
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
rxq = bp->rx_queues[act_q->index];
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
vnic->fw_vnic_id != INVALID_HW_RING_ID)
goto use_vnic;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
uint16_t j = dst_id - 1;
//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
- if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
conf->pool_map[j].pools & (1UL << j)) {
PMD_DRV_LOG(DEBUG,
"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
{
uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
- if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+ if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
switch (conf_link_speed) {
- case ETH_LINK_SPEED_10M_HD:
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
{
uint16_t eth_link_speed = 0;
- if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
- return ETH_LINK_SPEED_AUTONEG;
+ if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+ return RTE_ETH_LINK_SPEED_AUTONEG;
- switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_100M:
- case ETH_LINK_SPEED_100M_HD:
+ switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
break;
- case ETH_LINK_SPEED_2_5G:
+ case RTE_ETH_LINK_SPEED_2_5G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
break;
- case ETH_LINK_SPEED_20G:
+ case RTE_ETH_LINK_SPEED_20G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
return eth_link_speed;
}
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
- ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
static int bnxt_validate_link_speed(struct bnxt *bp)
{
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
uint32_t link_speed_capa;
uint32_t one_speed;
- if (link_speed == ETH_LINK_SPEED_AUTONEG)
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
return 0;
link_speed_capa = bnxt_get_speed_capabilities(bp);
- if (link_speed & ETH_LINK_SPEED_FIXED) {
- one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+ if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+ one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
if (one_speed & (one_speed - 1)) {
PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
{
uint16_t ret = 0;
- if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
if (bp->link_info->support_speeds)
return bp->link_info->support_speeds;
link_speed = BNXT_SUPPORTED_SPEEDS;
}
- if (link_speed & ETH_LINK_SPEED_100M)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_100M_HD)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_1G)
+ if (link_speed & RTE_ETH_LINK_SPEED_1G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
- if (link_speed & ETH_LINK_SPEED_2_5G)
+ if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
- if (link_speed & ETH_LINK_SPEED_10G)
+ if (link_speed & RTE_ETH_LINK_SPEED_10G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
- if (link_speed & ETH_LINK_SPEED_20G)
+ if (link_speed & RTE_ETH_LINK_SPEED_20G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
- if (link_speed & ETH_LINK_SPEED_25G)
+ if (link_speed & RTE_ETH_LINK_SPEED_25G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
- if (link_speed & ETH_LINK_SPEED_40G)
+ if (link_speed & RTE_ETH_LINK_SPEED_40G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
- if (link_speed & ETH_LINK_SPEED_50G)
+ if (link_speed & RTE_ETH_LINK_SPEED_50G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
- if (link_speed & ETH_LINK_SPEED_100G)
+ if (link_speed & RTE_ETH_LINK_SPEED_100G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
- if (link_speed & ETH_LINK_SPEED_200G)
+ if (link_speed & RTE_ETH_LINK_SPEED_200G)
ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
return ret;
}
static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
{
- uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+ uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
switch (hw_link_speed) {
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
- eth_link_speed = ETH_SPEED_NUM_100M;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
- eth_link_speed = ETH_SPEED_NUM_1G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
- eth_link_speed = ETH_SPEED_NUM_2_5G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
- eth_link_speed = ETH_SPEED_NUM_10G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
- eth_link_speed = ETH_SPEED_NUM_20G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
- eth_link_speed = ETH_SPEED_NUM_25G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
- eth_link_speed = ETH_SPEED_NUM_40G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
- eth_link_speed = ETH_SPEED_NUM_50G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
- eth_link_speed = ETH_SPEED_NUM_100G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
- eth_link_speed = ETH_SPEED_NUM_200G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_200G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
{
- uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (hw_link_duplex) {
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
/* FALLTHROUGH */
- eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
- eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
default:
PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
link->link_speed =
bnxt_parse_hw_link_speed(link_info->link_speed);
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
link->link_status = link_info->link_up;
link->link_autoneg = link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
- ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
exit:
return rc;
}
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
if (BNXT_CHIP_P5(bp) &&
- dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+ dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
/* 40G is not supported as part of media auto detect.
* The speed should be forced and autoneg disabled
* to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
HWRM_CHECK_RESULT();
- bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+ bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
svif_info = rte_le_to_cpu_16(resp->svif_info);
if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
rx_ring_info->rx_ring_struct->ring_size *
AGG_RING_SIZE_FACTOR)) : 0;
- if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
int tpa_max = BNXT_TPA_MAX_AGGS(bp);
tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
ag_bitmap_start, ag_bitmap_len);
/* TPA info */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rx_ring_info->tpa_info =
((struct bnxt_tpa_info *)
((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->nr_vnics = 0;
/* Multi-queue mode */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* FALLTHROUGH */
/* ETH_8/64_POOLs */
pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
max_pools = RTE_MIN(bp->max_vnics,
RTE_MIN(bp->max_l2_ctx,
RTE_MIN(bp->max_rsscos_ctx,
- ETH_64_POOLS)));
+ RTE_ETH_64_POOLS)));
PMD_DRV_LOG(DEBUG,
"pools = %u max_pools = %u\n",
pools, max_pools);
if (pools > max_pools)
pools = max_pools;
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
break;
default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
ring_idx, rxq, i, vnic);
}
if (i == 0) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
bp->eth_dev->data->promiscuous = 1;
vnic->flags |= BNXT_VNIC_INFO_PROMISC;
}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
vnic->end_grp_id = end_grp_id;
if (i) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
- !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+ !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
vnic->rss_dflt_cr = true;
goto skip_filter_allocation;
}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->rx_num_qs_per_vnic = nb_q_per_grp;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
if (bp->flags & BNXT_FLAG_UPDATE_HASH)
bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
for (i = 0; i < bp->nr_vnics; i++) {
- uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+ uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
vnic = &bp->vnic_info[i];
vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
rxq->queue_id = queue_idx;
rxq->port_id = eth_dev->data->port_id;
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
vnic = rxq->vnic;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq->rx_started = false;
PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (BNXT_HAS_RING_GRPS(bp))
vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
dev_conf = &rxq->bp->eth_dev->data->dev_conf;
offloads = dev_conf->rxmode.offloads;
- outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+ outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
/* Initialize ol_flags table. */
pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
{
uint16_t hwrm_type = 0;
- if (rte_type & ETH_RSS_IPV4)
+ if (rte_type & RTE_ETH_RSS_IPV4)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
- if (rte_type & ETH_RSS_IPV6)
+ if (rte_type & RTE_ETH_RSS_IPV6)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
{
uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
- bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
- bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP));
+ bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+ bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP));
bool l3_only = l3 && !l4;
bool l3_and_l4 = l3 && l4;
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
* return default hash mode.
*/
if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
- return ETH_RSS_LEVEL_PMD_DEFAULT;
+ return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
- rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
- rss_level |= ETH_RSS_LEVEL_INNERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
else
- rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+ rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
return rss_level;
}
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
if (vf >= bp->pdev->max_vfs)
return -EINVAL;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
return -ENOTSUP;
}
/* Is this really the correct mapping? VFd seems to think it is. */
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
flag |= BNXT_VNIC_INFO_PROMISC;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
flag |= BNXT_VNIC_INFO_BCAST;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptor limits */
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[52]; /**< 52-byte hash key buffer. */
uint8_t rss_key_len; /**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
uint16_t key_speed;
switch (speed) {
- case ETH_SPEED_NUM_NONE:
+ case RTE_ETH_SPEED_NUM_NONE:
key_speed = 0x00;
break;
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
key_speed = BOND_LINK_SPEED_KEY_10M;
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
key_speed = BOND_LINK_SPEED_KEY_100M;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
key_speed = BOND_LINK_SPEED_KEY_1000M;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
key_speed = BOND_LINK_SPEED_KEY_10G;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
key_speed = BOND_LINK_SPEED_KEY_20G;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
key_speed = BOND_LINK_SPEED_KEY_40G;
break;
default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (ret >= 0 && link_info.link_status != 0) {
key = link_speed_key(link_info.link_speed) << 1;
- if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+ if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
key |= BOND_LINK_FULL_DUPLEX_KEY;
} else {
key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
return 0;
internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return -1;
}
- if (link_props.link_status == ETH_LINK_UP) {
+ if (link_props.link_status == RTE_ETH_LINK_UP) {
if (internals->active_slave_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
internals->tx_queue_offload_capa = 0;
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = 0;
internals->candidate_max_rx_pktlen = 0;
internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
* In any other mode the link properties are set to default
* values of AUTONEG/DUPLEX
*/
- ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
- ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+ ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
}
}
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
/* If RSS is enabled for bonding, try to enable it for slaves */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* If RSS is enabled for bonding, synchronize RETA */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int i;
struct bond_dev_private *internals;
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 1;
internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
tlb_last_obytets[internals->active_slaves[i]] = 0;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
bond_ctx = ethdev->data->dev_private;
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
bond_ctx->active_slave_count == 0) {
- ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
- ethdev->data->dev_link.link_status = ETH_LINK_UP;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
if (wait_to_complete)
link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
&slave_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
- ETH_SPEED_NUM_NONE;
+ RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
"Slave (port %u) link get failed: %s",
bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
* In these modes the maximum theoretical link speed is the sum
* of all the slaves
*/
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
goto link_update;
/* check link state properties if bonded link is up*/
- if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
"for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (internals->active_slave_count < 1) {
/* If first active slave, then change link status */
bonded_eth_dev->data->dev_link.link_status =
- ETH_LINK_UP;
+ RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < reta_count; i++) {
internals->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->max_rx_pktlen = 0;
/* Initially allow to choose any offload type */
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
memset(&internals->default_rxconf, 0,
sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* set key to the value specified in port RSS configuration.
* Fall back to default RSS key if the key is not specified
*/
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
struct rte_eth_rss_conf *rss_conf =
&dev->data->dev_conf.rx_adv_conf.rss_conf;
if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
internals->reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
internals->reta_conf[i].reta[j] =
- (i * RTE_RETA_GROUP_SIZE + j) %
+ (i * RTE_ETH_RETA_GROUP_SIZE + j) %
dev->data->nb_rx_queues;
}
}
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9dfea99db9b2..d52f8ffecf23 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
/* Platform specific checks */
if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
plt_err("Outer IP and SCTP checksum unsupported");
return -EINVAL;
}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* TSO not supported for earlier chip revisions
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
- dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
/* 50G and 100G to be supported for board version C0
* and above of CN9K.
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
}
dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
if (roc_nix_is_vf_or_sdp(&dev->nix) ||
dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
uint32_t speed_capa;
/* Auto negotiation disabled */
- speed_capa = ETH_LINK_SPEED_FIXED;
+ speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
- speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
}
return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
struct roc_nix *nix = &dev->nix;
int i, rc = 0;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup Inline Inbound */
rc = roc_nix_inl_inb_init(nix);
if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
cnxk_nix_inb_mode_set(dev, true);
}
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
struct plt_bitmap *bmap;
size_t bmap_sz;
void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
- /* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+ /* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
goto done;
rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
done:
return 0;
cleanup:
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
rc |= roc_nix_inl_inb_fini(nix);
return rc;
}
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
int rc, ret = 0;
/* Cleanup Inline inbound */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy inbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
}
/* Cleanup Inline outbound */
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy outbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
}
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct rte_eth_fc_conf fc_conf = {0};
int rc;
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
- (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+ (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_cfg.mode =
- (fc_cfg.mode == RTE_FC_FULL ||
- fc_cfg.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_cfg.mode == RTE_ETH_FC_FULL ||
+ fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
/* When Tx Security offload is enabled, increase tx desc count by
* max possible outbound desc count.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
nb_desc += dev->outb.nb_desc;
/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* to avoid meta packet drop as LBK does not currently support
* backpressure.
*/
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
rxq_sp->qconf.nb_desc = nb_desc;
rxq_sp->qconf.mp = mp;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup rq reference for inline dev if present */
rc = roc_nix_inl_dev_rq_get(rq);
if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
rc = cnxk_nix_tsc_convert(dev);
if (rc) {
plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
plt_nix_dbg("Releasing rxq %u", qid);
/* Release rq reference for inline dev if present */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
roc_nix_inl_dev_rq_put(rq);
/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
dev->ethdev_rss_hf = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
uint64_t rss_hf;
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
/* Nothing much to do if offload is not enabled */
if (!(dev->tx_offloads &
- (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+ (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
return 0;
/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
/* Prepare rx cfg */
rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
/* Disable drop re if rx offload security is enabled and
* platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled on PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
cnxk_eth_dev_ops.timesync_enable(eth_dev);
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
CNXK_NIX_TX_NB_SEG_MAX)
#define CNXK_NIX_RSS_L3_L4_SRC_DST \
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
#define CNXK_NIX_RSS_OFFLOAD \
- (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
- ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+ (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL | \
+ RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST | \
+ RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
#define CNXK_NIX_TX_OFFLOAD_CAPA \
- (DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+ (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
#define CNXK_NIX_RX_OFFLOAD_CAPA \
- (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
- DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SECURITY)
+ (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SECURITY)
#define RSS_IPV4_ENABLE \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE \
- (ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+ (RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
val = ROC_NIX_RSS_RETA_SZ_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
val = ROC_NIX_RSS_RETA_SZ_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
val = ROC_NIX_RSS_RETA_SZ_256;
else
val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
- {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
- {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
- {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_SECURITY, " Security,"},
- {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+ {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+ {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
"Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
- {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
- {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
- {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
- {DEV_TX_OFFLOAD_SECURITY, " Security,"},
- {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+ {RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
};
static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
"Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
enum rte_eth_fc_mode mode_map[] = {
- RTE_FC_NONE, RTE_FC_RX_PAUSE,
- RTE_FC_TX_PAUSE, RTE_FC_FULL
+ RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+ RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
};
struct roc_nix *nix = &dev->nix;
int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
plt_err("Scatter offload is not enabled for mtu");
goto exit;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
plt_err("Greater than maximum supported packet length");
goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta[idx] = reta_conf[i].reta[j];
idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
goto fail;
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = reta[idx];
idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_conf->rss_key)
roc_nix_rss_key_set(nix, rss_conf->rss_key);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
plt_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex"
: "half-duplex");
else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
eth_link.link_status = link->status;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return 0;
if (roc_nix_is_lbk(&dev->nix)) {
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_autoneg = ETH_LINK_FIXED;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
rc = roc_nix_mac_link_info_get(&dev->nix, &info);
if (rc)
return rc;
link.link_status = info.status;
link.link_speed = info.speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (info.full_duplex)
link.link_duplex = info.full_duplex;
}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rc = roc_nix_ptp_rx_ena_dis(nix, true);
if (!rc) {
@@ -257,7 +257,7 @@ int
cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+ uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
struct roc_nix *nix = &dev->nix;
int rc = 0;
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index ad89a2e105b1..c86c92ce4c2f 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
#define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
#define CXGBE_DEFAULT_RSS_KEY_LEN 40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
/* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
/* Devargs filtermode and filtermask representation */
enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
}
new_link.link_status = cxgbe_force_linkup(adapter) ?
- ETH_LINK_UP : pi->link_cfg.link_ok;
+ RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
goto out;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
else
eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
CXGBE_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!(adapter->flags & FW_QUEUE_BOUND)) {
err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
rx_pause = 1;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
u8 tx_pause = 0, rx_pause = 0;
int ret;
- if (fc_conf->mode == RTE_FC_FULL) {
+ if (fc_conf->mode == RTE_ETH_FC_FULL) {
tx_pause = 1;
rx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
tx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
rx_pause = 1;
}
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
}
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_100G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_50G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_25G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
* that step explicitly.
*/
ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
- !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+ !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
true);
if (ret == 0) {
ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
}
if (ret == 0 && cxgbe_force_linkup(adapter))
- pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return ret;
}
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_UDPEN;
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
{
#define SET_SPEED(__speed_name) \
do { \
- *speed_caps |= ETH_LINK_ ## __speed_name; \
+ *speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
} while (0)
#define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
speed_caps);
if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
- *speed_caps |= ETH_LINK_SPEED_FIXED;
+ *speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
}
/**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
- if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
/* Configure link only if link is UP*/
if (link->link_status) {
- if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
/* Start autoneg only if link is not in autoneg mode */
if (!link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- } else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
- switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M_HD:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ } else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_10M:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_100M_HD:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M_HD:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_100M:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_1G:
- speed = ETH_SPEED_NUM_1G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_2_5G:
- speed = ETH_SPEED_NUM_2_5G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_2_5G:
+ speed = RTE_ETH_SPEED_NUM_2_5G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_10G:
- speed = ETH_SPEED_NUM_10G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
- speed = ETH_SPEED_NUM_NONE;
- duplex = ETH_LINK_FULL_DUPLEX;
+ speed = RTE_ETH_SPEED_NUM_NONE;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
}
/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
if (fif->mac_type == fman_mac_1g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G;
} else if (fif->mac_type == fman_mac_2_5g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G
- | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G
+ | RTE_ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
return ret;
- if (link->link_status == ETH_LINK_DOWN &&
+ if (link->link_status == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
if (ioctl_version < 2) {
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (fif->mac_type == fman_mac_1g)
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
else if (fif->mac_type == fman_mac_2_5g)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else if (fif->mac_type == fman_mac_10g)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (fc_conf->mode == RTE_FC_NONE) {
+ if (fc_conf->mode == RTE_ETH_FC_NONE) {
return 0;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
- fc_conf->mode == RTE_FC_FULL) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+ fc_conf->mode == RTE_ETH_FC_FULL) {
fman_if_set_fc_threshold(dev->process_private,
fc_conf->high_water,
fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
}
ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time =
fman_if_get_fc_quanta(dev->process_private);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
fc_conf = dpaa_intf->fc_conf;
ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
#define DPAA_DEBUG_FQ_TX_ERROR 1
#define DPAA_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP)
#define DPAA_TX_CKSUM_OFFLOAD_MASK ( \
PKT_TX_IP_CKSUM | \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
if (req_dist_set % 2 != 0) {
dist_field = 1U << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_ETH;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
if (ipv4_configured)
break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV4;
break;
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (ipv6_configured)
break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV6;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
if (tcp_configured)
break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_TCP;
break;
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (udp_configured)
break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_UDP;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
if (req_dist_set % 2 != 0) {
dist_field = 1ULL << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
- case ETH_RSS_ETH:
-
+ case RTE_ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_ETH:
if (l2_configured)
break;
l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_PPPOE:
+ case RTE_ETH_RSS_PPPOE:
if (pppoe_configured)
break;
kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_ESP:
+ case RTE_ETH_RSS_ESP:
if (esp_configured)
break;
esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_AH:
+ case RTE_ETH_RSS_AH:
if (ah_configured)
break;
ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_C_VLAN:
- case ETH_RSS_S_VLAN:
+ case RTE_ETH_RSS_C_VLAN:
+ case RTE_ETH_RSS_S_VLAN:
if (vlan_configured)
break;
vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_MPLS:
+ case RTE_ETH_RSS_MPLS:
if (mpls_configured)
break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (l3_configured)
break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_TCP_EX:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (l4_configured)
break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* enable timestamp in mbuf */
bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN Filter not avaialble */
if (!priv->max_vlan_filters) {
DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
priv->token, true);
else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_rx_offloads_nodis;
dev_info->tx_offload_capa = dev_tx_offloads_sup |
dev_tx_offloads_nodis;
- dev_info->speed_capa = ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
if (dpaa2_svr_family == SVR_LX2160A) {
- dev_info->speed_capa |= ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+ {RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
};
/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
return -1;
}
- if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
ret = dpaa2_setup_flow_dist(dev,
eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rx_l3_csum_offload = true;
- if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
rx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
#if !defined(RTE_LIBRTE_IEEE1588)
- if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
#endif
{
ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dpaa2_enable_ts[dev->data->port_id] = true;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
tx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
dpaa2_tm_init(dev);
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
return -1;
}
- if (state.up == ETH_LINK_DOWN &&
+ if (state.up == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
link.link_speed = state.rate;
if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* No TX side flow control (send Pause frame disabled)
*/
if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
} else {
/* DPNI_LINK_OPT_PAUSE not set
* if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* Flow control disabled
*/
if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* update cfg with fc_conf */
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
/* Full flow control;
* OPT_PAUSE set, ASYM_PAUSE not set
*/
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
/* Enable RX flow control
* OPT_PAUSE not set;
* ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_PAUSE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
/* Enable TX Flow control
* OPT_PAUSE set
* ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
/* Disable Flow control
* OPT_PAUSE not set
* ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
#define DPAA2_TX_CONF_ENABLE 0x08
#define DPAA2_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP | \
- ETH_RSS_MPLS | \
- ETH_RSS_C_VLAN | \
- ETH_RSS_S_VLAN | \
- ETH_RSS_ESP | \
- ETH_RSS_AH | \
- ETH_RSS_PPPOE)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_MPLS | \
+ RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_ESP | \
+ RTE_ETH_RSS_AH | \
+ RTE_ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
rte_vlan_strip(bufs[num_rx]);
dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
eth_data->port_id);
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP) {
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
rte_vlan_strip(bufs[num_rx]);
}
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags
& PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
int ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 93bee734ae5d..031c92a66fa0 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
#define E1000_FTQF_QUEUE_ENABLE 0x00000100
#define IGB_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
/*
* The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
e1000_clear_hw_cntrs_base_generic(hw);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_em_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
em_vlan_hw_strip_enable(dev);
else
em_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
em_vlan_hw_filter_enable(dev);
else
em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
if (link.link_status) {
PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
struct em_rx_entry *sw_ring; /**< address of RX software ring. */
struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
- uint64_t offloads; /**< Offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
uint16_t nb_rx_desc; /**< number of RX descriptors. */
uint16_t rx_tail; /**< current value of RDT register. */
uint16_t nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
uint8_t wthresh; /**< Write-back threshold register. */
struct em_ctx_info ctx_cache;
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
RTE_SET_USED(dev);
tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
return tx_offload_capa;
}
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
uint64_t rx_offload_capa;
rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
return rx_offload_capa;
}
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
- if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
- tx_mq_mode == ETH_MQ_TX_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+ tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* Check multi-queue mode.
- * To no break software we accept ETH_MQ_RX_NONE as this might
+ * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
* be used to turn off VLAN filter.
*/
- if (rx_mq_mode == ETH_MQ_RX_NONE ||
- rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
} else {
/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
/* TX mode is not used here, so mode might be ignored.*/
- if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(WARNING, "SRIOV is active,"
" TX mode %d is not supported. "
" Driver will behave as %d mode.",
- tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
}
/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
return -EINVAL;
}
- if (tx_mq_mode != ETH_MQ_TX_NONE &&
- tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+ tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
" Due to txmode is meaningless in this"
" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
/*
* VLAN Offload Settings
*/
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_igb_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
return ret;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
case e1000_82576:
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return -EINVAL;
}
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = 0x3FFF; /* See RLPML register. */
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
switch (hw->mac.type) {
case e1000_vfadapt:
dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else if (!link_check) {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= E1000_CTRL_EXT_EXT_VLAN;
/* only outer TPID of double VLAN can be configured*/
- if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg = E1000_READ_REG(hw, E1000_VET);
reg = (reg & (~E1000_VET_VET_EXT)) |
((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igb_vlan_hw_strip_enable(dev);
else
igb_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igb_vlan_hw_filter_enable(dev);
else
igb_vlan_hw_filter_disable(dev);
}
- if(mask & ETH_VLAN_EXTEND_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
igb_vlan_hw_extend_enable(dev);
else
igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* on configuration
*/
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
ctrl |= E1000_CTRL_RFCE;
ctrl &= ~E1000_CTRL_TFCE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
ctrl |= E1000_CTRL_TFCE;
ctrl &= ~E1000_CTRL_RFCE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
break;
default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
if (*vfinfo == NULL)
rte_panic("Cannot allocate memory for private VF data\n");
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -186,7 +186,7 @@ struct igb_tx_queue {
/**< Start context position for transmit queue. */
struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
RTE_SET_USED(dev);
- tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return tx_offload_capa;
}
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == e1000_i350 ||
hw->mac.type == e1000_i210 ||
hw->mac.type == e1000_i211)
- rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
return rx_offload_capa;
}
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
E1000_VMOLR_MPME);
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
vmolr |= E1000_VMOLR_AUPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
vmolr |= E1000_VMOLR_ROMPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
vmolr |= E1000_VMOLR_ROPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
vmolr |= E1000_VMOLR_BAM;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
vmolr |= E1000_VMOLR_MPME;
E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VLVF: set up filters for vlan tags as configured */
for (i = 0; i < cfg->nb_pool_maps; i++) {
/* set vlan id in VF register and set the valid bit */
- E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
- (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
- ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+ E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+ (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+ ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
E1000_VLVF_POOLSEL_MASK)));
}
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t mrqc;
- if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+ if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
/*
* SRIOV active scheme
* FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default:
igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Set maximum packet length by default, and might be updated
* together with enabling/disabling dual VLAN.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
max_len += VLAN_TAG_SIZE;
E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
rxcsum |= E1000_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
if (rxmode->offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
rxcsum |= E1000_RXCSUM_TUOFL;
else
rxcsum &= ~E1000_RXCSUM_TUOFL;
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_CRCOFL;
else
rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
- DEV_TX_OFFLOAD_UDP_CKSUM |\
- DEV_TX_OFFLOAD_IPV4_CKSUM |\
- DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
PKT_TX_IP_CKSUM |\
PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
(queue_offloads & QUEUE_OFFLOADS)) {
/* check if TSO is required */
if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
ena_tx_ctx->tso_enable = true;
ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L3 checksum is needed */
if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
ena_tx_ctx->l3_csum_enable = true;
if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L4 checksum is needed */
if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
ena_tx_ctx->l4_csum_enable = true;
} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
PKT_TX_UDP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
ena_tx_ctx->l4_csum_enable = true;
} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
struct rte_eth_link *link = &dev->data->dev_link;
struct ena_adapter *adapter = dev->data->dev_private;
- link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
if (rc)
goto err_start_tx;
- if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
rc = ena_rss_configure(adapter);
if (rc)
goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
adapter->state = ENA_ADAPTER_STATE_CONFIG;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
- dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Scattered Rx cannot be turned off in the HW, so this capability must
* be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.rx_offloads &
(ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
port_offloads |=
- DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
- port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return port_offloads;
}
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
- port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.tx_offloads &
(ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
port_offloads |=
- DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
- port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return port_offloads;
}
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
dev_info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/* Inform framework about available features */
dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
#endif
- fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+ fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
descs_in_use = rx_ring->ring_size -
ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
#ifdef RTE_LIBRTE_ETHDEV_DEBUG
/* Check if requested offload is also enabled for the queue */
if ((ol_flags & PKT_TX_IP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
(l4_csum_flag == PKT_TX_TCP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
(l4_csum_flag == PKT_TX_UDP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
PMD_TX_LOG(DEBUG,
"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
i, m->nb_segs, tx_ring->id);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
#define ENA_HASH_KEY_SIZE 40
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
/* Each reta_conf is for 64 entries.
* To support 128 we use 2 conf of 64.
*/
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
entry_value =
ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0 ; i < reta_size ; i++) {
- reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
reta_conf[reta_conf_idx].reta[reta_idx] =
ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Convert proto to ETH flag */
switch (proto) {
case ENA_ADMIN_RSS_TCP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
break;
case ENA_ADMIN_RSS_UDP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
break;
case ENA_ADMIN_RSS_TCP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
break;
case ENA_ADMIN_RSS_UDP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
break;
case ENA_ADMIN_RSS_IP4:
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
break;
case ENA_ADMIN_RSS_IP6:
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
break;
case ENA_ADMIN_RSS_IP4_FRAG:
- rss_hf |= ETH_RSS_FRAG_IPV4;
+ rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
break;
case ENA_ADMIN_RSS_NOT_IP:
- rss_hf |= ETH_RSS_L2_PAYLOAD;
+ rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
break;
case ENA_ADMIN_RSS_TCP6_EX:
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
break;
case ENA_ADMIN_RSS_IP6_EX:
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
break;
default:
break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L3. */
switch (fields & ENA_HF_RSS_ALL_L3) {
case ENA_ADMIN_RSS_L3_SA:
- rss_hf |= ETH_RSS_L3_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L3_DA:
- rss_hf |= ETH_RSS_L3_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
break;
default:
break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L4. */
switch (fields & ENA_HF_RSS_ALL_L4) {
case ENA_ADMIN_RSS_L4_SP:
- rss_hf |= ETH_RSS_L4_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L4_DP:
- rss_hf |= ETH_RSS_L4_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
break;
default:
break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
/* Determine which fields of L3 should be used. */
- switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
- case ETH_RSS_L3_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+ case RTE_ETH_RSS_L3_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_DA;
break;
- case ETH_RSS_L3_SRC_ONLY:
+ case RTE_ETH_RSS_L3_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_SA;
break;
default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
}
/* Determine which fields of L4 should be used. */
- switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
- case ETH_RSS_L4_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ case RTE_ETH_RSS_L4_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_DP;
break;
- case ETH_RSS_L4_SRC_ONLY:
+ case RTE_ETH_RSS_L4_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_SP;
break;
default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
int rc, i;
/* Turn on appropriate fields for each requested packet type */
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
- if ((rss_hf & ETH_RSS_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
selected_fields[ENA_ADMIN_RSS_IP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
- if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
- if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+ if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
uint16_t admin_hf;
static bool warn_once;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
return -ENOTSUP;
}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
if (status & ENETC_LINK_MODE)
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
else
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
if (status & ENETC_LINK_STATUS)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
switch (status & ENETC_LINK_SPEED_MASK) {
case ENETC_LINK_SPEED_1G:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ENETC_LINK_SPEED_100M:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
default:
case ENETC_LINK_SPEED_10M:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa =
- (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC);
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC);
return 0;
}
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
RTE_ETH_QUEUE_STATE_STOPPED;
}
- rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0);
return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
int config;
config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
checksum &= ~L3_CKSUM;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
checksum &= ~L4_CKSUM;
enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
*/
uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
uint8_t rss_enable;
- uint64_t rss_hf; /* ETH_RSS flags */
+ uint64_t rss_hf; /* RTE_ETH_RSS flags */
union vnic_rss_key rss_key;
union vnic_rss_cpu rss_cpu;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
uint16_t sub_devid;
uint32_t capa;
} vic_speed_capa_map[] = {
- { 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
- { 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
- { 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
- { 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
- { 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
- { 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
- { 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
- { 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
- { 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
- { 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
- { 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
- { 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
- { 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
- { 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
- { 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1440 Mezz */
- { 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1480 MLOM */
- { 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
- { 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
- { 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
- { 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
- { 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
- { 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+ { 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+ { 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+ { 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+ { 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+ { 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+ { 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+ { 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+ { 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+ { 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+ { 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+ { 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+ { 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+ { 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+ { 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+ { 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+ { 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+ { 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+ { 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+ { 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+ { 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+ { 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+ { 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
{ 0, 0 }, /* End marker */
};
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
ENICPMD_FUNC_TRACE();
offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
enic->ig_vlan_strip_en = 1;
else
enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->mc_count = 0;
enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM);
+ RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* All vlan offload masks to apply the current settings */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = enicpmd_vlan_offload_set(eth_dev, mask);
if (ret) {
dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
}
/* 1300 and later models are at least 40G */
if (id >= 0x0100)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
/* VFs have subsystem id 0, check device id */
if (id == 0) {
/* Newer VF implies at least 40G model */
if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
}
- return ETH_LINK_SPEED_10G;
+ return RTE_ETH_LINK_SPEED_10G;
}
static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
*/
rss_cpu = enic->rss_cpu;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
rss_cpu.cpu[i / 4].b[i % 4] =
enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
*/
conf->offloads = enic->rx_offload_capa;
if (!enic->ig_vlan_strip_en)
- conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx_thresh and other fields are not applicable for enic */
}
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
static int udp_tunnel_common_check(struct enic *enic,
struct rte_eth_udp_tunnel *tnl)
{
- if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
- tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+ if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+ tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
memset(&link, 0, sizeof(link));
link.link_status = enic_get_link_status(enic);
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = vnic_dev_port_speed(enic->vdev);
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
}
eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* vnic notification of link status has already been turned on in
* enic_dev_init() which is called during probe time. Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
* and vlan insertion are supported.
*/
simple_tx_offloads = enic->tx_offload_capa &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if ((eth_dev->data->dev_conf.txmode.offloads &
~simple_tx_offloads) == 0) {
ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type = 0;
rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
if (enic->rq_count > 1 &&
- (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
rss_hf != 0) {
rss_enable = 1;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
if (enic->udp_rss_weak) {
/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
}
}
- if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
if (enic->udp_rss_weak)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
return -EINVAL;
}
enic->tx_offload_capa |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- (enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
- (enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ (enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+ (enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
enic->tx_offload_mask |=
PKT_TX_OUTER_IPV6 |
PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
* IPV4 hash type handles both non-frag and frag packet types.
* TCP/UDP is controlled via a separate flag below.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (ENIC_SETTING(enic, RSSHASH_IPV6))
/*
* The VIC adapter can perform RSS on IPv6 packets with and
* without extension headers. An IPv6 "fragment" is an IPv6
* packet with the fragment extension header.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (enic->udp_rss_weak)
enic->flow_type_rss_offloads |=
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
/* Zero offloads if RSS is not enabled */
if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
enic->tx_queue_offload_capa = 0;
enic->tx_offload_capa =
enic->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->tx_offload_mask =
PKT_TX_IPV6 |
PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
static const struct rte_eth_link eth_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
};
static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
int qid;
struct rte_eth_dev *fsdev;
struct rxq **rxq;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH(sdev)->data->dev_conf.intr_conf;
fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
failsafe_rx_intr_install(struct rte_eth_dev *dev)
{
struct fs_priv *priv = PRIV(dev);
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&priv->data->dev_conf.intr_conf;
if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
* configuring a sub-device.
*/
infos->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->rx_queue_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
infos->flow_type_rss_offloads =
- ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP;
+ RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP;
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
uint8_t drop_en;
uint8_t rx_deferred_start; /* don't start this queue in dev start. */
uint16_t rx_ftag_en; /* indicates FTAG RX supported */
- uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
};
/*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
uint16_t next_rs; /* Next pos to set RS flag */
uint16_t next_dd; /* Next pos to check DD flag */
volatile uint32_t *tail_ptr;
- uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
uint16_t nb_desc;
uint16_t port_id;
uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
- if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
return 0;
if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
};
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
*/
hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
if (mrqc == 0) {
PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
if (hw->mac.type != fm10k_mac_pf)
return;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
nb_queue_pools = vmdq_conf->nb_queue_pools;
/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
- rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t reg;
dev->data->scattered_rx = 1;
reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
}
/* Update default vlan when not in VMDQ mode */
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_50G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
dev->data->dev_link.link_status =
- dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+ dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
return 0;
}
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->max_vfs = pdev->max_vfs;
dev_info->vmdq_pool_base = 0;
dev_info->vmdq_queue_base = 0;
- dev_info->max_vmdq_pools = ETH_32_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_32_POOLS;
dev_info->vmdq_queue_num = FM10K_MAX_QUEUES_PF;
dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
dev_info->reta_size = FM10K_MAX_RSS_INDICES;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
return -EINVAL;
}
- if (vlan_id > ETH_VLAN_ID_MAX) {
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
return -EINVAL;
}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
}
static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_RSS_HASH);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
}
static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO);
+ return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO);
}
static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
return -EINVAL;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
/* If the mapping doesn't fit any supported, return */
if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
hf = 0;
- hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV4) ? RTE_ETH_RSS_IPV4 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX : 0;
rss_conf->rss_hf = hf;
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
/* first clear the internal SW recording structure */
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
false);
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
MAIN_VSI_POOL_NUMBER);
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
true);
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
/* whithout rx ol_flags, no VP flag report */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
#endif
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
static int hinic_link_event_process(struct hinic_hwdev *hwdev,
struct rte_eth_dev *eth_dev, u8 status)
{
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
struct nic_port_info port_info;
struct rte_eth_link link;
int rc = HINIC_OK;
if (!status) {
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
memset(&port_info, 0, sizeof(port_info));
rc = hinic_get_port_info(hwdev, &port_info);
if (rc) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
link.link_speed = port_speed[port_info.speed %
LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
/* init vlan offoad */
err = hinic_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
} else {
*speed_capa = 0;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
- *speed_capa |= ETH_LINK_SPEED_1G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
- *speed_capa |= ETH_LINK_SPEED_10G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
- *speed_capa |= ETH_LINK_SPEED_25G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
- *speed_capa |= ETH_LINK_SPEED_40G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
- *speed_capa |= ETH_LINK_SPEED_100G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_100G;
}
}
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
hinic_get_speed_capa(dev, &info->speed_capa);
info->rx_queue_offload_capa = 0;
- info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_RSS_HASH;
+ info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
info->tx_queue_offload_capa = 0;
- info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
u8 port_link_status = 0;
struct nic_port_info port_link_info;
struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
rc = hinic_get_link_status(nic_hwdev, &port_link_status);
if (rc)
return rc;
if (!port_link_status) {
- link->link_status = ETH_LINK_DOWN;
+ link->link_status = RTE_ETH_LINK_DOWN;
link->link_speed = 0;
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
- link->link_autoneg = ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
return HINIC_OK;
}
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* Get link status information from hardware */
rc = hinic_priv_get_dev_link_status(nic_dev, &link);
if (rc != HINIC_OK) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Get link status failed");
goto out;
}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
int err;
/* Enable or disable VLAN filter */
- if (mask & ETH_VLAN_FILTER_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
TRUE : FALSE;
err = hinic_config_vlan_filter(nic_dev->hwdev, on);
if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Enable or disable VLAN stripping */
- if (mask & ETH_VLAN_STRIP_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
TRUE : FALSE;
err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = nic_pause.auto_neg;
if (nic_pause.tx_pause && nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (nic_pause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else if (nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_pause.auto_neg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
nic_pause.tx_pause = true;
else
nic_pause.tx_pause = false;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
nic_pause.rx_pause = true;
else
nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err = 0;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_OK;
}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_ERROR;
}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
rss_conf->rss_hf |= rss_type.ipv4 ?
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
rss_conf->rss_hf |= rss_type.ipv6 ?
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
- rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
return HINIC_OK;
}
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
u16 i = 0;
u16 idx, shift;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
return HINIC_OK;
if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
/* update rss indir_tbl */
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
{
u64 rss_hf = rss_conf->rss_hf;
- rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
}
static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
{
int err, i;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
nic_dev->num_rss = 0;
if (nic_dev->num_rq > 1) {
/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
PMD_DRV_LOG(WARNING, "Alloc rss template failed");
return err;
}
- nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
for (i = 0; i < nic_dev->num_rq; i++)
hinic_add_rq_to_rx_queue_list(nic_dev, i);
}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
{
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (hinic_rss_template_free(nic_dev->hwdev,
nic_dev->rss_tmpl_idx))
PMD_DRV_LOG(WARNING, "Free rss template failed");
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
}
}
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
int ret = 0;
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
ret = hinic_config_mq_rx_rss(nic_dev, on);
break;
default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
int lro_wqe_num;
int buf_size;
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (rss_conf.rss_hf == 0) {
rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
}
/* Enable both L3/L4 rx checksum offload */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
goto rx_csum_ofl_err;
/* config lro */
- lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
true : false;
max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
hinic_rss_deinit(nic_dev);
hinic_destroy_num_qps(nic_dev);
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
#define HINIC_DEFAULT_RX_FREE_THRESH 32
#define HINIC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
return ret;
}
- if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
if (dcb_rx_conf->nb_tcs == 0)
hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
uint16_t nb_tx_q = hw->data->nb_tx_queues;
int ret;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
return 0;
ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
{
switch (mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->requested_fc_mode = HNS3_FC_NONE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->requested_fc_mode = HNS3_FC_FULL;
break;
default:
hw->requested_fc_mode = HNS3_FC_NONE;
hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
- "configured to RTE_FC_NONE", mode);
+ "configured to RTE_ETH_FC_NONE", mode);
break;
}
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f58704..8e0ccecb57a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
};
static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
- { ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
};
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
struct hns3_cmd_desc desc;
int ret;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
return -EINVAL;
}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rte_spinlock_lock(&hw->lock);
rxmode = &dev->data->dev_conf.rxmode;
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
/* ignore vlan filter configuration during promiscuous mode */
if (!dev->data->promiscuous) {
/* Enable or disable VLAN filter */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
true : false;
ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
true : false;
ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
return ret;
}
- ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+ ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
if (ret) {
hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
if (!hw->data->promiscuous) {
/* restore vlan filter states */
offloads = hw->data->dev_conf.rxmode.offloads;
- enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+ enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
ret = hns3_enable_vlan_filter(hns, enable);
if (ret) {
hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
txmode->hw_vlan_reject_untagged);
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
ret = hns3_vlan_offload_set(dev, mask);
if (ret) {
hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
- (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
- hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+ hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
"nb_tcs(%d) != %d or %d in rx direction.",
dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
* configure link_speeds (default 0), which means auto-negotiation.
* In this case, it should return success.
*/
- if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
hw->mac.support_autoneg == 0)
return 0;
- if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
ret = hns3_check_port_speed(hw, link_speeds);
if (ret)
return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
if (ret)
goto cfg_err;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = hns3_setup_dcb(dev);
if (ret)
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rss_conf = conf->rx_adv_conf.rss_conf;
hw->rss_dis_flag = false;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_10M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
- speed_capa |= ETH_LINK_SPEED_10M;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
return speed_capa;
}
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
return speed_capa;
}
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_firber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
if (hns3_dev_get_support(hw, PTP))
- info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
ret = hns3_update_link_info(eth_dev);
if (ret)
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
return ret;
}
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link->link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_NONE;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link->link_duplex = mac->link_duplex;
- new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link->link_autoneg = mac->link_autoneg;
}
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
new_link.link_duplex = mac->link_duplex;
- new_link.link_speed = ETH_SPEED_NUM_NONE;
- new_link.link_status = ETH_LINK_DOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ new_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
break;
}
- if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+ if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
{
switch (speed_cmd) {
case HNS3_CFG_SPEED_10M:
- *speed = ETH_SPEED_NUM_10M;
+ *speed = RTE_ETH_SPEED_NUM_10M;
break;
case HNS3_CFG_SPEED_100M:
- *speed = ETH_SPEED_NUM_100M;
+ *speed = RTE_ETH_SPEED_NUM_100M;
break;
case HNS3_CFG_SPEED_1G:
- *speed = ETH_SPEED_NUM_1G;
+ *speed = RTE_ETH_SPEED_NUM_1G;
break;
case HNS3_CFG_SPEED_10G:
- *speed = ETH_SPEED_NUM_10G;
+ *speed = RTE_ETH_SPEED_NUM_10G;
break;
case HNS3_CFG_SPEED_25G:
- *speed = ETH_SPEED_NUM_25G;
+ *speed = RTE_ETH_SPEED_NUM_25G;
break;
case HNS3_CFG_SPEED_40G:
- *speed = ETH_SPEED_NUM_40G;
+ *speed = RTE_ETH_SPEED_NUM_40G;
break;
case HNS3_CFG_SPEED_50G:
- *speed = ETH_SPEED_NUM_50G;
+ *speed = RTE_ETH_SPEED_NUM_50G;
break;
case HNS3_CFG_SPEED_100G:
- *speed = ETH_SPEED_NUM_100G;
+ *speed = RTE_ETH_SPEED_NUM_100G;
break;
case HNS3_CFG_SPEED_200G:
- *speed = ETH_SPEED_NUM_200G;
+ *speed = RTE_ETH_SPEED_NUM_200G;
break;
default:
return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
switch (speed) {
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
break;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
break;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
break;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
int ret;
pf->support_sfp_query = true;
- mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+ mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
if (ret) {
PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
return ret;
}
- mac->link_status = ETH_LINK_DOWN;
+ mac->link_status = RTE_ETH_LINK_DOWN;
return hns3_config_mtu(hw, pf->mps);
}
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
* all packets coming in in the receiving direction.
*/
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, false);
if (ret) {
hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
}
/* when promiscuous mode was disabled, restore the vlan filter status */
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, true);
if (ret) {
hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
mac_info->supported_speed =
rte_le_to_cpu_32(resp->supported_speed);
mac_info->support_autoneg = resp->autoneg_ability;
- mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
- : ETH_LINK_AUTONEG;
+ mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+ : RTE_ETH_LINK_AUTONEG;
} else {
mac_info->query_type = HNS3_DEFAULT_QUERY;
}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
static uint8_t
hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
{
- if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
- duplex = ETH_LINK_FULL_DUPLEX;
+ if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
return duplex;
}
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
return ret;
/* Do nothing if no SFP */
- if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+ if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
return 0;
/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
/* Config full duplex for SFP */
return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
- ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_FULL_DUPLEX);
}
static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
/*
- * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+ * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
* when receiving frames. Otherwise, CRC will be stripped.
*/
- if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
else
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
hns3_err(hw, "get link status cmd failed %d", ret);
- return ETH_LINK_DOWN;
+ return RTE_ETH_LINK_DOWN;
}
req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
return HNS3_FIBER_LINK_SPEED_1G_BIT;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
return HNS3_FIBER_LINK_SPEED_10G_BIT;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
return HNS3_FIBER_LINK_SPEED_25G_BIT;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
return HNS3_FIBER_LINK_SPEED_40G_BIT;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
return HNS3_FIBER_LINK_SPEED_50G_BIT;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
return HNS3_FIBER_LINK_SPEED_100G_BIT;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
return HNS3_FIBER_LINK_SPEED_200G_BIT;
default:
hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M:
speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
break;
- case ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
break;
- case ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M:
speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
break;
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
break;
default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_1G:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
break;
default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
static inline uint32_t
hns3_get_link_speed(uint32_t link_speeds)
{
- uint32_t speed = ETH_SPEED_NUM_NONE;
-
- if (link_speeds & ETH_LINK_SPEED_10M ||
- link_speeds & ETH_LINK_SPEED_10M_HD)
- speed = ETH_SPEED_NUM_10M;
- if (link_speeds & ETH_LINK_SPEED_100M ||
- link_speeds & ETH_LINK_SPEED_100M_HD)
- speed = ETH_SPEED_NUM_100M;
- if (link_speeds & ETH_LINK_SPEED_1G)
- speed = ETH_SPEED_NUM_1G;
- if (link_speeds & ETH_LINK_SPEED_10G)
- speed = ETH_SPEED_NUM_10G;
- if (link_speeds & ETH_LINK_SPEED_25G)
- speed = ETH_SPEED_NUM_25G;
- if (link_speeds & ETH_LINK_SPEED_40G)
- speed = ETH_SPEED_NUM_40G;
- if (link_speeds & ETH_LINK_SPEED_50G)
- speed = ETH_SPEED_NUM_50G;
- if (link_speeds & ETH_LINK_SPEED_100G)
- speed = ETH_SPEED_NUM_100G;
- if (link_speeds & ETH_LINK_SPEED_200G)
- speed = ETH_SPEED_NUM_200G;
+ uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+ if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+ link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+ speed = RTE_ETH_SPEED_NUM_10M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+ link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+ speed = RTE_ETH_SPEED_NUM_100M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+ speed = RTE_ETH_SPEED_NUM_1G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+ speed = RTE_ETH_SPEED_NUM_10G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+ speed = RTE_ETH_SPEED_NUM_25G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+ speed = RTE_ETH_SPEED_NUM_40G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+ speed = RTE_ETH_SPEED_NUM_50G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+ speed = RTE_ETH_SPEED_NUM_100G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+ speed = RTE_ETH_SPEED_NUM_200G;
return speed;
}
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
static uint8_t
hns3_get_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
struct hns3_set_link_speed_cfg cfg;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
- cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
- if (cfg.autoneg != ETH_LINK_AUTONEG) {
+ cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
ret = hns3_cfg_mac_mode(hw, false);
if (ret)
return ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
current_mode = hns3_get_current_fc_mode(dev);
switch (current_mode) {
case HNS3_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case HNS3_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HNS3_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case HNS3_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
}
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
int i;
rte_spinlock_lock(&hw->lock);
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = pf->local_max_tc;
else
dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
struct rte_eth_dev *eth_dev;
eth_dev = &rte_eth_devices[hw->data->port_id];
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (hw->adapter_state == HNS3_NIC_STARTED) {
rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
* in device of link speed
* below 10 Gbps.
*/
- if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+ if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
*state = 0;
return 0;
}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
* configured FEC mode is returned.
* If link is up, current FEC mode is returned.
*/
- if (hw->mac.link_status == ETH_LINK_DOWN) {
+ if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
ret = get_current_fec_auto_state(hw, &auto_state);
if (ret)
return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
uint32_t cur_capa;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
cur_capa = fec_capa[1].capa;
break;
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
cur_capa = fec_capa[0].capa;
break;
default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e28056b1bd60..0f55fd4c83ad 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t media_type;
uint8_t phy_addr;
- uint8_t link_duplex : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
- uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
- uint8_t link_status : 1; /* ETH_LINK_[DOWN/UP] */
- uint32_t link_speed; /* ETH_SPEED_NUM_ */
+ uint8_t link_duplex : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint8_t link_status : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /* RTE_ETH_SPEED_NUM_ */
/*
* Some firmware versions support only the SFP speed query. In addition
* to the SFP speed query, some firmware supports the query of the speed
@@ -1076,9 +1076,9 @@ static inline uint64_t
hns3_txvlan_cap_get(struct hns3_hw *hw)
{
if (hw->port_base_vlan_cfg.state)
- return DEV_TX_OFFLOAD_VLAN_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
else
- return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
}
#endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798f2..7b784048b518 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
hns3_err(hw, "setting link speed/duplex not supported");
ret = -EINVAL;
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
hw->rss_dis_flag = false;
rss_conf = conf->rx_adv_conf.rss_conf;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN filter */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = hns3vf_en_vlan_filter(hw, true);
else
ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Vlan stripping setting */
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ret = hns3vf_en_hw_strip_rxvtag(hw, true);
else
ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
int ret;
dev_conf = &hw->data->dev_conf;
- en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+ en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
: false;
ret = hns3vf_en_hw_strip_rxvtag(hw, en);
if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
}
/* Apply vlan offload setting */
- ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
struct hns3_hw *hw = &hns->hw;
int ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
/*
* The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&new_link, 0, sizeof(new_link));
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link.link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link.link_duplex = mac->link_duplex;
- new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(eth_dev, &new_link);
}
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
* Make sure call update link status before hns3vf_stop_poll_job
* because update link status depend on polling job exist.
*/
- hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+ hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
hw->mac.link_duplex);
hns3vf_stop_poll_job(eth_dev);
}
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
hns3_set_rxtx_function(eth_dev);
rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
* Kunpeng930 and future kunpeng series support to use src/dst port
* fields to RSS hash for IPv6 SCTP packet type.
*/
- if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
- (rss->types & ETH_RSS_IP ||
+ if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+ (rss->types & RTE_ETH_RSS_IP ||
(!hw->rss_info.ipv6_sctp_offload_supported &&
- rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+ rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return false;
return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
struct hns3_hw *hw = &hns->hw;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
return 0;
ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_tuple_table[] = {
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
};
@@ -146,44 +146,44 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_rss_types[] = {
- { ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+ { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+ { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
};
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
* When user does not specify the following types or a combination of
* the following types, it enables all fields for the supported RSS
* types. the following types as:
- * - ETH_RSS_L3_SRC_ONLY
- * - ETH_RSS_L3_DST_ONLY
- * - ETH_RSS_L4_SRC_ONLY
- * - ETH_RSS_L4_DST_ONLY
+ * - RTE_ETH_RSS_L3_SRC_ONLY
+ * - RTE_ETH_RSS_L3_DST_ONLY
+ * - RTE_ETH_RSS_L4_SRC_ONLY
+ * - RTE_ETH_RSS_L4_DST_ONLY
*/
if (fields_count == 0) {
for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
sizeof(rss_cfg->rss_indirection_tbl));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
rte_spinlock_unlock(&hw->lock);
hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
}
rte_spinlock_lock(&hw->lock);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] =
rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/* When RSS is off, redirect the packet queue 0 */
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
hns3_rss_uninit(hns);
/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
* When RSS is off, it doesn't need to configure rss redirection table
* to hardware.
*/
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
hw->rss_ind_tbl_size);
if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
return ret;
rss_indir_table_uninit:
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret1 = hns3_rss_reset_indir_table(hw);
if (ret1 != 0)
return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
#include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
#define HNS3_RSS_IND_TBL_SIZE 512 /* The size of hash lookup table */
#define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
/* CRC len set here is used for amending packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
rxq->rx_buf_len);
}
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = !dev->data->scattered_rx &&
- (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+ (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
int ret;
offloads = hw->data->dev_conf.rxmode.offloads;
- gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return false;
- return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+ return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
}
static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
return true;
#else
#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
uint16_t rx_rearm_nb; /* number of remaining BDs to be re-armed */
- /* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+ /* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
uint8_t crc_len;
/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return -ENOTSUP;
- /* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
- if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ /* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
return -ENOTSUP;
return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
int
hns3_rx_check_vec_support(struct rte_eth_dev *dev)
{
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN;
+ uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* Set the global registers with default ether type value */
if (!pf->support_multi_driver) {
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
if (ret != I40E_SUCCESS) {
PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Only legacy filter API needs the following fdir config. So when the
* legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
* number, which will be available after rx_queue_setup(). dev_start()
* function is good to place RSS setup.
*/
- if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
ret = i40e_vmdq_setup(dev);
if (ret)
goto err;
}
- if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = i40e_dcb_setup(dev);
if (ret) {
PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
{
uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed |= I40E_LINK_SPEED_40GB;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed |= I40E_LINK_SPEED_25GB;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed |= I40E_LINK_SPEED_20GB;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed |= I40E_LINK_SPEED_10GB;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed |= I40E_LINK_SPEED_1GB;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
link_speed |= I40E_LINK_SPEED_100MB;
return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
I40E_AQ_PHY_LINK_ENABLED;
- if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
- conf->link_speeds = ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_100M;
+ if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+ conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_100M;
abilities |= I40E_AQ_PHY_AN_ENABLED;
} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
/* Parse the link status */
switch (link_speed) {
case I40E_REG_SPEED_0:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_REG_SPEED_1:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_REG_SPEED_2:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_REG_SPEED_3:
if (hw->mac.type == I40E_MAC_X722) {
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
} else {
reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
if (reg_val & I40E_REG_MACC_25GB)
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
else
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
}
break;
case I40E_REG_SPEED_4:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
default:
PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
status = i40e_aq_get_link_info(hw, enable_lse,
&link_status, NULL);
if (unlikely(status != I40E_SUCCESS)) {
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
return;
}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
/* Parse the link status */
switch (link_status.link_speed) {
case I40E_LINK_SPEED_100MB:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (link->link_status)
- link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
/* i40e uses full duplex only */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (!wait_to_complete && !enable_lse)
update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
/* For XL710 */
- dev_info->speed_capa = ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
dev_info->default_rxportconf.nb_queues = 2;
dev_info->default_txportconf.nb_queues = 2;
if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
/* For XXV710 */
- dev_info->speed_capa = ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
dev_info->default_txportconf.ring_size = 256;
} else {
/* For X710 */
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
dev_info->default_rxportconf.ring_size = 512;
dev_info->default_txportconf.ring_size = 256;
} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
int ret;
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
reg_id = 2;
}
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
int ret = 0;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) ||
- (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+ (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
/* 802.1ad frames ability is added in NVM API 1.7*/
if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->first_tag = rte_cpu_to_le_16(tpid);
- else if (vlan_type == ETH_VLAN_TYPE_INNER)
+ else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
hw->second_tag = rte_cpu_to_le_16(tpid);
} else {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->second_tag = rte_cpu_to_le_16(tpid);
}
ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
i40e_vsi_config_vlan_filter(vsi, TRUE);
else
i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_vlan_stripping(vsi, FALSE);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
i40e_vsi_config_double_vlan(vsi, TRUE);
/* Set global registers with default ethertype. */
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
}
else
i40e_vsi_config_double_vlan(vsi, FALSE);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
/* Enable or disable outer VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* Return current mode according to actual setting*/
switch (hw->fc.current_mode) {
case I40E_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case I40E_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case I40E_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case I40E_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
};
return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
struct i40e_hw *hw;
struct i40e_pf *pf;
enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
- [RTE_FC_NONE] = I40E_FC_NONE,
- [RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
- [RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
- [RTE_FC_FULL] = I40E_FC_FULL
+ [RTE_ETH_FC_NONE] = I40E_FC_NONE,
+ [RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+ [RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+ [RTE_ETH_FC_FULL] = I40E_FC_FULL
};
/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
}
rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
hw->func_caps.num_vsis - vsi_count);
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS);
+ RTE_ETH_64_POOLS);
if (pf->max_nb_vmdq_vsi) {
pf->flags |= I40E_FLAG_VMDQ;
pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
int mask = 0;
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = i40e_vlan_offload_set(dev, mask);
if (ret) {
PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
- if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+ if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
- else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+ else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
else {
PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
{
uint32_t vid_idx, vid_bit;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return 0;
vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
int ret;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return;
i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
{
switch (filter_type) {
- case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
break;
- case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_IMAC_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
break;
- case ETH_TUNNEL_FILTER_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
break;
- case ETH_TUNNEL_FILTER_OIP:
+ case RTE_ETH_TUNNEL_FILTER_OIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
break;
- case ETH_TUNNEL_FILTER_IIP:
+ case RTE_ETH_TUNNEL_FILTER_IIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
break;
default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8843,7 +8843,7 @@ int
i40e_pf_reset_rss_reta(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->adapter->hw;
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
int num;
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
* configured. It's necessary to calculate the actual PF
* queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
else
num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
if (!(rss_hf & pf->adapter->flow_types_mask) ||
- !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return 0;
hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_25G:
tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
else
*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_cfg->pfc.willing = 0;
dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
uint16_t bsf, tc_mapping;
int i, j = 0;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
else
dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
}
j++;
- } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+ } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
I40E_FLAG_RSS_AQ_CAPABLE)
#define I40E_RSS_OFFLOAD_ALL ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/* All bits of RSS hash enable for X722*/
#define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /**< Hash key. */
- uint16_t queue[ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
+ uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
bool symmetric_enable; /**< true, if enable symmetric */
uint64_t config_pctypes; /**< All PCTYPES with the flow */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
}
static uint16_t i40e_supported_tunnel_filter_types[] = {
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IMAC,
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
};
static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
vxlan_spec->vni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
nvgre_spec->tni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
/* IPv4 */
- { ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* IPv6 */
- { ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* Port */
- { ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
/* Ether */
- { ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
/* VLAN */
- { ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
};
#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
pattern, rss_mask, true, cus_pctype }
-#define I40E_HASH_L2_RSS_MASK (ETH_RSS_VLAN | ETH_RSS_ETH | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY)
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
-#define I40E_HASH_IPV4_L23_RSS_MASK (ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-#define I40E_HASH_L4_TYPES (ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
static const struct i40e_hash_match_pattern match_patterns[] = {
/* Ether */
I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+ RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
I40E_FILTER_PCTYPE_L2_PAYLOAD),
/* IPv4 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV4),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
/* IPv6 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
/* ESP */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
/* GTPC */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
/* L2TPV3 */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
/* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV6),
};
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
* it is the same case as none of them are added.
*/
- mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
- if (mask == ETH_RSS_L2_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
inset &= ~I40E_INSET_DMAC;
- else if (mask == ETH_RSS_L2_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
inset &= ~I40E_INSET_SMAC;
- mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
- if (mask == ETH_RSS_L3_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == ETH_RSS_L3_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
- if (mask == ETH_RSS_L4_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
inset &= ~I40E_INSET_DST_PORT;
- else if (mask == ETH_RSS_L4_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
inset &= ~I40E_INSET_SRC_PORT;
if (rss_types & I40E_HASH_L4_TYPES) {
uint64_t l3_mask = rss_types &
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
uint64_t l4_mask = rss_types &
- (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
if (l3_mask && !l4_mask)
inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
/* Update lookup table */
if (rss_info->queue_num > 0) {
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i, j = 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
"RSS key is ignored when queues specified");
pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
max_queue = i40e_pf_calc_configured_queues_num(pf);
else
max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
uint64_t type, mask;
/* Validate L2 */
- type = ETH_RSS_ETH & rss_types;
- mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L3 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L4 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
- mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
if (!type && mask)
return false;
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
event.event_data.link_event.link_status =
dev->data->dev_link.link_status;
- /* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+ /* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
switch (dev->data->dev_link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
for (i = 0; i < tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
if (k) {
for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = reg_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
}
/* check simple tx conflict */
if (ad->tx_simple_allowed) {
- if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+ if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
PMD_DRV_LOG(ERR, "No-simple tx is required.");
return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
ad->tx_vec_allowed = (ad->tx_simple_allowed &&
txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
bool rx_deferred_start; /**< don't start this queue in dev start */
uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
uint8_t dcb_tc; /**< Traffic class of rx queue */
- uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
bool q_set; /**< indicate if tx queue has been configured */
bool tx_deferred_start; /**< don't start this queue in dev start */
uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < n; i++) {
free[i] = txep[i].mbuf;
txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct i40e_rx_queue *rxq;
uint16_t desc, i;
bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
/* no QinQ support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
return -EINVAL;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
return i40e_vsi_config_vlan_filter(vsi, TRUE);
else
return i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
return i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
VIRTCHNL_VF_OFFLOAD_RX_POLLING)
#define IAVF_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
static const uint64_t map_hena_rss[] = {
/* IPv4 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
- ETH_RSS_NONFRAG_IPV4_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
- ETH_RSS_NONFRAG_IPV4_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
/* IPv6 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
- ETH_RSS_NONFRAG_IPV6_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
- ETH_RSS_NONFRAG_IPV6_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
/* L2 Payload */
- [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+ [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
};
- const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV4_OTHER |
- ETH_RSS_FRAG_IPV4;
+ const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_FRAG_IPV4;
- const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP |
- ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_FRAG_IPV6;
+ const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_FRAG_IPV6;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
/**
- * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+ * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
* generalizations of all other IPv4 and IPv6 RSS types.
*/
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
rss_hf |= ipv4_rss;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
rss_hf |= ipv6_rss;
RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
if (valid_rss_hf & ipv4_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
if (valid_rss_hf & ipv6_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
if (rss_hf & ~valid_rss_hf)
PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
return 0;
enable = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
iavf_config_vlan_insert_v2(adapter, enable);
return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
int err;
err = iavf_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to update vlan offload");
return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
ad->rx_vec_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Large VF setting */
if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
*/
switch (vf->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
bool enable;
int err;
- if (mask & ETH_VLAN_FILTER_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
iavf_iterate_vlan_filters_v2(dev, enable);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = iavf_config_vlan_strip_v2(adapter, enable);
/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
err = iavf_enable_vlan_strip(adapter);
else
err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(lut, vf->rss_lut, reta_size);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = vf->rss_lut[i];
}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
ret = iavf_query_stats(adapter, &pstats);
if (ret == 0) {
uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
RTE_ETHER_CRC_LEN;
iavf_update_stats(vsi, pstats);
stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 1f2d3772d105..248054f79efd 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,90 +341,90 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* VLAN IPV4 */
#define IAVF_RSS_TYPE_VLAN_IPV4 (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4 ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4 RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6 ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6 RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* GTPU IPv4 */
#define IAVF_RSS_TYPE_GTPU_IPV4 (IAVF_RSS_TYPE_INNER_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_UDP (IAVF_RSS_TYPE_INNER_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_TCP (IAVF_RSS_TYPE_INNER_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define IAVF_RSS_TYPE_GTPU_IPV6 (IAVF_RSS_TYPE_INNER_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_UDP (IAVF_RSS_TYPE_INNER_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_TCP (IAVF_RSS_TYPE_INNER_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/**
* Supported pattern for hash.
@@ -442,7 +442,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv4_udp, IAVF_RSS_TYPE_VLAN_IPV4_UDP, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_vlan_ipv4_tcp, IAVF_RSS_TYPE_VLAN_IPV4_TCP, &outer_ipv4_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv4_sctp, IAVF_RSS_TYPE_VLAN_IPV4_SCTP, &outer_ipv4_sctp_tmplt},
- {iavf_pattern_eth_ipv4_gtpu, ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
+ {iavf_pattern_eth_ipv4_gtpu, RTE_ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4, IAVF_RSS_TYPE_GTPU_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_udp, IAVF_RSS_TYPE_GTPU_IPV4_UDP, &inner_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp, IAVF_RSS_TYPE_GTPU_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -484,9 +484,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ah, IAVF_RSS_TYPE_IPV4_AH, &ipv4_ah_tmplt},
{iavf_pattern_eth_ipv4_l2tpv3, IAVF_RSS_TYPE_IPV4_L2TPV3, &ipv4_l2tpv3_tmplt},
{iavf_pattern_eth_ipv4_pfcp, IAVF_RSS_TYPE_IPV4_PFCP, &ipv4_pfcp_tmplt},
- {iavf_pattern_eth_ipv4_gtpc, ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
- {iavf_pattern_eth_ecpri, ETH_RSS_ECPRI, &eth_ecpri_tmplt},
- {iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_gtpc, RTE_ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ecpri, RTE_ETH_RSS_ECPRI, &eth_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_ecpri, RTE_ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4_tcp, IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -504,7 +504,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
- {iavf_pattern_eth_ipv6_gtpu, ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
+ {iavf_pattern_eth_ipv6_gtpu, RTE_ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6, IAVF_RSS_TYPE_GTPU_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_udp, IAVF_RSS_TYPE_GTPU_IPV6_UDP, &inner_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp, IAVF_RSS_TYPE_GTPU_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -546,7 +546,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv6_ah, IAVF_RSS_TYPE_IPV6_AH, &ipv6_ah_tmplt},
{iavf_pattern_eth_ipv6_l2tpv3, IAVF_RSS_TYPE_IPV6_L2TPV3, &ipv6_l2tpv3_tmplt},
{iavf_pattern_eth_ipv6_pfcp, IAVF_RSS_TYPE_IPV6_PFCP, &ipv6_pfcp_tmplt},
- {iavf_pattern_eth_ipv6_gtpc, ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ipv6_gtpc, RTE_ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6_tcp, IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -580,52 +580,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
struct virtchnl_rss_cfg rss_cfg;
#define IAVF_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
rss_cfg.proto_hdrs = inner_ipv4_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
rss_cfg.proto_hdrs = inner_ipv6_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
@@ -779,28 +779,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr = &proto_hdrs->proto_hdr[i];
switch (hdr->type) {
case VIRTCHNL_PROTO_HDR_ETH:
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
hdr->field_selector = 0;
- else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
REFINE_PROTO_FLD(DEL, ETH_DST);
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
REFINE_PROTO_FLD(DEL, ETH_SRC);
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
- } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -808,39 +808,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -857,7 +857,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
break;
case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
else
hdr->field_selector = 0;
@@ -865,87 +865,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_TCP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_SCTP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_S_VLAN:
- if (!(rss_type & ETH_RSS_S_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_S_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_C_VLAN:
- if (!(rss_type & ETH_RSS_C_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_C_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_L2TPV3:
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ESP:
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_AH:
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_PFCP:
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ECPRI:
- if (!(rss_type & ETH_RSS_ECPRI))
+ if (!(rss_type & RTE_ETH_RSS_ECPRI))
hdr->field_selector = 0;
break;
default:
@@ -962,7 +962,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
struct virtchnl_proto_hdr *hdr;
int i;
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
for (i = 0; i < proto_hdrs->count; i++) {
@@ -1059,10 +1059,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -1073,27 +1073,27 @@ struct rss_attr_type {
uint64_t type;
};
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE64)
#define INVALID_RSS_ATTR (RTE_ETH_RSS_L3_PRE32 | \
@@ -1103,9 +1103,9 @@ struct rss_attr_type {
RTE_ETH_RSS_L3_PRE96)
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE64, VALID_RSS_IPV6},
{INVALID_RSS_ATTR, 0}
@@ -1122,15 +1122,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->vsi = vsi;
rxq->offloads = offloads;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
#define IAVF_VPMD_TX_MAX_FREE_BUF 64
#define IAVF_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define IAVF_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define IAVF_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IAVF_VECTOR_PATH 0
#define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
PMD_DRV_LOG(DEBUG, "RSS is not supported");
return -ENOTSUP;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
/* set all lut items to default queue */
memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb8556..a90e40964ec5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -576,7 +576,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -637,7 +637,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
}
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
return 0;
@@ -652,8 +652,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
return 0;
}
@@ -675,27 +675,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -925,42 +925,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
*/
switch (hw->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = hw->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -979,11 +979,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
udp_tunnel->udp_port);
break;
@@ -1010,8 +1010,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
break;
default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
bool enable = !!(dev_conf->rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (!ice_dcf_vlan_offload_ena(repr))
return -ENOTSUP;
- if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer VLAN in QinQ\n");
return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
err = ice_dcf_vf_repr_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
"Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
int err;
err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
TAILQ_INIT(&vsi->mac_list);
TAILQ_INIT(&vsi->vlan_list);
- /* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+ /* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
- ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+ RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
hw->func_caps.common_cap.rss_table_size;
pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
int ret;
#define ICE_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
cfg.symm = 0;
cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
/* Configure RSS for IPv4 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for IPv6 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_rx_queues) {
ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_set_rx_function(dev);
ice_set_tx_function(dev);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = ice_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->flow_type_rss_offloads = 0;
if (!is_safe_mode) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
dev_info->rx_queue_offload_capa = 0;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->reta_size = pf->hash_lut_size;
dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_align = ICE_ALIGN_RING_DESC,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_25G;
phy_type_low = hw->port_info->phy.phy_type_low;
phy_type_high = hw->port_info->phy.phy_type_high;
if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->nb_rx_queues = dev->data->nb_rx_queues;
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
status = ice_aq_get_link_info(hw->port_info, enable_lse,
&link_status, NULL);
if (status != ICE_SUCCESS) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
goto out;
}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
goto out;
/* Full-duplex operation at all supported speeds */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* Parse the link status */
switch (link_status.link_speed) {
case ICE_AQ_LINK_SPEED_10MB:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case ICE_AQ_LINK_SPEED_100MB:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case ICE_AQ_LINK_SPEED_1000MB:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ICE_AQ_LINK_SPEED_2500MB:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case ICE_AQ_LINK_SPEED_5GB:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case ICE_AQ_LINK_SPEED_10GB:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case ICE_AQ_LINK_SPEED_20GB:
- link.link_speed = ETH_SPEED_NUM_20G;
+ link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case ICE_AQ_LINK_SPEED_25GB:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case ICE_AQ_LINK_SPEED_40GB:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case ICE_AQ_LINK_SPEED_50GB:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case ICE_AQ_LINK_SPEED_100GB:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case ICE_AQ_LINK_SPEED_UNKNOWN:
PMD_DRV_LOG(ERR, "Unknown link speed");
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
default:
PMD_DRV_LOG(ERR, "None link speed");
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
out:
ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ice_vsi_config_vlan_filter(vsi, true);
else
ice_vsi_config_vlan_filter(vsi, false);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ice_vsi_config_vlan_stripping(vsi, true);
else
ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
break;
default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
break;
default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
int ret;
if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_TIMESTAMP)) {
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
return -1;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
ICE_FLAG_VF_MAC_BY_PF)
#define ICE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/**
* The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
#define ICE_IPV4_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
#define ICE_IPV6_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE32 | \
RTE_ETH_RSS_L3_PRE48 | \
RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
};
/* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_UDP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_TCP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_SCTP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4 ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4 RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG (ETH_RSS_ETH | ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_ETH_IPV6_UDP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_TCP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_SCTP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6 ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6 RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define ICE_RSS_TYPE_VLAN_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV4)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_VLAN_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_SCTP (ICE_RSS_TYPE_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define ICE_RSS_TYPE_VLAN_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_FRAG (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_VLAN_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_SCTP (ICE_RSS_TYPE_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* GTPU IPv4 */
#define ICE_RSS_TYPE_GTPU_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define ICE_RSS_TYPE_GTPU_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* PPPOE */
-#define ICE_RSS_TYPE_PPPOE (ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE (RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
/* PPPOE IPv4 */
#define ICE_RSS_TYPE_PPPOE_IPV4 (ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
ICE_RSS_TYPE_PPPOE)
/* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/* MAC */
-#define ICE_RSS_TYPE_ETH ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH RTE_ETH_RSS_ETH
/**
* Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
*hash_flds &= ~ICE_FLOW_HASH_ETH;
- if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
- if (rss_type & ETH_RSS_ETH)
+ if (rss_type & RTE_ETH_RSS_ETH)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
- if (rss_type & ETH_RSS_C_VLAN)
+ if (rss_type & RTE_ETH_RSS_C_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
- else if (rss_type & ETH_RSS_S_VLAN)
+ else if (rss_type & RTE_ETH_RSS_S_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
- if (!(rss_type & ETH_RSS_PPPOE))
+ if (!(rss_type & RTE_ETH_RSS_PPPOE))
*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
}
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
}
if (rss_type & RTE_ETH_RSS_L3_PRE32) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE48) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
}
}
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
};
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE32, VALID_RSS_IPV6},
{RTE_ETH_RSS_L3_PRE48, VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
}
}
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
/* Register mbuf field and flag for Rx timestamp */
err = rte_mbuf_dyn_rx_timestamp_register(
&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
QRXFLXP_CNTXT_RXDID_PRIO_M;
- if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
regval |= QRXFLXP_CNTXT_TS_M;
ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = vsi->base_queue + queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
ice_rxd_to_vlan_tci(mb, &rxdp[j]);
rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
for (i = 0; i < txq->tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
rte_mempool_put(txep->mbuf->pool, txep->mbuf);
txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
}
#define ICE_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define ICE_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define ICE_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define ICE_VECTOR_PATH 0
#define ICE_VECTOR_OFFLOAD_PATH 1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
if (rxq->proto_xtr != PROTO_XTR_NONE)
return -1;
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
return -1;
if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (tx_mq_mode != ETH_MQ_TX_NONE)
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
PMD_INIT_LOG(WARNING,
"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = igc_check_mq_mode(dev);
if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (speed == SPEED_2500) {
uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
} else {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
/* VLAN Offload Settings */
eth_igc_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
hw->mac.autoneg = 1;
} else {
int num_speeds = 0;
- if (*speeds & ETH_LINK_SPEED_FIXED) {
+ if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_DRV_LOG(ERR,
"Force speed mode currently not supported");
igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
hw->phy.autoneg_advertised = 0;
hw->mac.autoneg = 1;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_2_5G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
num_speeds++;
}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
- dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_vmdq_pools = 0;
dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->fc.requested_mode = igc_fc_none;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->fc.requested_mode = igc_fc_rx_pause;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->fc.requested_mode = igc_fc_tx_pause;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->fc.requested_mode = igc_fc_full;
break;
default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* set redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta, reg;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to update the register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* check mask whether need to read the register value first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* read redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to read register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf = 0;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf |= rss_hf;
return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igc_vlan_hw_strip_enable(dev);
else
igc_vlan_hw_strip_disable(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igc_vlan_hw_filter_enable(dev);
else
igc_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return igc_vlan_hw_extend_enable(dev);
else
return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
uint32_t reg_val;
/* only outer TPID of double VLAN can be configured*/
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg_val = IGC_READ_REG(hw, IGC_VET);
reg_val = (reg_val & (~IGC_VET_EXT)) |
((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
#define IGC_TX_MAX_MTU_SEG UINT8_MAX
#define IGC_RX_OFFLOAD_ALL ( \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IGC_TX_OFFLOAD_ALL ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_UDP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_UDP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define IGC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IGC_MAX_ETQF_FILTERS 3 /* etqf(3) is used for 1588 */
#define IGC_ETQF_FILTER_1588 3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
/**< Start context position for transmit queue. */
struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
}
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
}
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igc_rss_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/*
* configure RSS register for following,
* then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0;
bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
rxcsum |= IGC_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= IGC_RXCSUM_IPOFL;
else
rxcsum &= ~IGC_RXCSUM_IPOFL;
if (offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rxcsum |= IGC_RXCSUM_TUOFL;
- offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
} else {
rxcsum &= ~IGC_RXCSUM_TUOFL;
}
- if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
rxcsum |= IGC_RXCSUM_CRCOFL;
else
rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
dvmolr |= IGC_DVMOLR_STRVLAN;
else
dvmolr &= ~IGC_DVMOLR_STRVLAN;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
dvmolr &= ~IGC_DVMOLR_STRCRC;
else
dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
if (on) {
reg_val |= IGC_DVMOLR_STRVLAN;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(link));
if (adapter->idev.port_info->config.an_enable) {
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
}
if (!adapter->link_up ||
!(lif->state & IONIC_LIF_F_UP)) {
/* Interface is down */
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else {
/* Interface is up */
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (adapter->link_speed) {
case 10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
dev_info->speed_capa =
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/*
* Per-queue capabilities
* RTE does not support disabling a feature on a queue if it is
* enabled globally on the device. Thus the driver does not advertise
- * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+ * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
* though the driver would be otherwise capable of disabling it on
* a per-queue basis.
*/
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
*/
dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
0;
dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
0;
dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
fc_conf->autoneg = 0;
if (idev->port_info->config.pause_type)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
break;
- case RTE_FC_RX_PAUSE:
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
return -ENOTSUP;
}
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = tbl_sz / RTE_RETA_GROUP_SIZE;
+ num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if (reta_conf[i].mask & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
}
}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
IONIC_RSS_HASH_KEY_SIZE);
if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
rss_conf->rss_hf = rss_hf;
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (!lif->rss_ind_tbl)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
rss_types |= IONIC_RSS_TYPE_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
rss_types |= IONIC_RSS_TYPE_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
static inline uint32_t
ionic_parse_link_speeds(uint16_t link_speeds)
{
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
return 100000;
- else if (link_speeds & ETH_LINK_SPEED_50G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
return 50000;
- else if (link_speeds & ETH_LINK_SPEED_40G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
return 40000;
- else if (link_speeds & ETH_LINK_SPEED_25G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
return 25000;
- else if (link_speeds & ETH_LINK_SPEED_10G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
return 10000;
else
return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
IONIC_PRINT_CALL();
allowed_speeds =
- ETH_LINK_SPEED_FIXED |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_FIXED |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
if (dev_conf->link_speeds & ~allowed_speeds) {
IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure link */
- an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
ionic_dev_cmd_port_autoneg(idev, an_enable);
err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
#include <rte_ethdev.h>
#define IONIC_ETH_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
/*
* IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
- * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+ * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
*/
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
else
lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
/*
* NB: While it is true that RSS_HASH is always enabled on ionic,
* setting this flag unconditionally causes problems in DTS.
- * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
*/
/* RX per-port */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
lif->features |= IONIC_ETH_HW_RX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_RX_CSUM;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
lif->features |= IONIC_ETH_HW_RX_SG;
lif->eth_dev->data->scattered_rx = 1;
} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
}
/* Covers VLAN_STRIP */
- ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+ ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
/* TX per-port */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
lif->features |= IONIC_ETH_HW_TX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_TX_CSUM;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
else
lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
lif->features |= IONIC_ETH_HW_TX_SG;
else
lif->features &= ~IONIC_ETH_HW_TX_SG;
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
lif->features |= IONIC_ETH_HW_TSO;
lif->features |= IONIC_ETH_HW_TSO_IPV6;
lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
txq->flags |= IONIC_QCQ_F_DEFERRED;
/* Convert the offload flags into queue flags */
- if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_L3;
- if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_TCP;
- if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_UDP;
eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/*
* Note: the interface does not currently support
- * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
* when the adapter will be able to keep the CRC and subtract
* it to the length for all received packets:
* if (eth_dev->data->dev_conf.rxmode.offloads &
- * DEV_RX_OFFLOAD_KEEP_CRC)
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC)
* rxq->crc_len = ETHER_CRC_LEN;
*/
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->speed_capa =
(hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
- ETH_LINK_SPEED_10G :
+ RTE_ETH_LINK_SPEED_10G :
((hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
- ETH_LINK_SPEED_25G :
- ETH_LINK_SPEED_AUTONEG);
+ RTE_ETH_LINK_SPEED_25G :
+ RTE_ETH_LINK_SPEED_AUTONEG);
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
};
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
(uint64_t *)&link_speed);
switch (link_speed) {
case IFPGA_RAWDEV_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case IFPGA_RAWDEV_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= IXGBE_DMATXCTL_GDV;
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (qinq) {
reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
" by single VLAN");
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (qinq) {
/* Only the high 16-bits is valid */
IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
if (hw->mac.type == ixgbe_mac_82598EB) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
ctrl |= IXGBE_VLNCTRL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl |= IXGBE_RXDCTL_VME;
on = TRUE;
} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct ixgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
ixgbe_vlan_hw_strip_config(dev);
- }
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ixgbe_vlan_hw_filter_enable(dev);
else
ixgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
ixgbe_vlan_hw_extend_enable(dev);
else
ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB) {
if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
ixgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- allowed_speeds = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
break;
default:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
}
link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
speed = IXGBE_LINK_SPEED_82599_AUTONEG;
}
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= IXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= IXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= IXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= IXGBE_LINK_SPEED_100_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= IXGBE_LINK_SPEED_10_FULL;
}
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB)
dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
if (hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X540_vf ||
hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550_vf) {
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
}
if (hw->mac.type == ixgbe_mac_X550) {
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
}
/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
u32 esdp_reg;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
if (diag != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case IXGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
case IXGBE_LINK_SPEED_10_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case IXGBE_LINK_SPEED_100_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case IXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case IXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case IXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case IXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
ixgbevf_vlan_strip_queue_set(dev, i, on);
}
}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= IXGBE_VMOLR_AUPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= IXGBE_VMOLR_ROMPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= IXGBE_VMOLR_ROPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= IXGBE_VMOLR_BAM;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= IXGBE_VMOLR_MPE;
return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = IXGBE_INCVAL_100;
shift = IXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = IXGBE_INCVAL_1GB;
shift = IXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = IXGBE_INCVAL_10GB;
shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- return ETH_RSS_RETA_SIZE_512;
+ return RTE_ETH_RSS_RETA_SIZE_512;
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
- return ETH_RSS_RETA_SIZE_64;
+ return RTE_ETH_RSS_RETA_SIZE_64;
case ixgbe_mac_X540_vf:
case ixgbe_mac_82599_vf:
return 0;
default:
- return ETH_RSS_RETA_SIZE_128;
+ return RTE_ETH_RSS_RETA_SIZE_128;
}
}
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- if (reta_idx < ETH_RSS_RETA_SIZE_128)
+ if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
return IXGBE_RETA(reta_idx >> 2);
else
- return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+ return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled*/
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled*/
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
#define IXGBE_FDIR_NVGRE_TUNNEL_TYPE 0x0
#define IXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IXGBE_VF_IRQ_ENABLE_MASK 3 /* vf irq enable mask */
#define IXGBE_VF_MAXMSIVECTOR 1
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
uint32_t key);
static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input, uint8_t queue,
uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
{
*fdirctrl = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
IXGBE_SECTXCTRL_STORE_FORWARD);
reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
gpie |= IXGBE_GPIE_VTMODE_16;
break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index b263dfe1d574..9e5716f935a2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hw->mac.type != ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return offloads;
}
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
uint64_t offloads;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (ixgbe_is_vf(dev) == 0)
- offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
/*
* RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X550) &&
!RTE_ETH_DEV_SRIOV(dev).active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
}
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
ixgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
}
/* MRQC: enable vmdq and dcb */
- mrqc = (num_pools == ETH_16_POOLS) ?
+ mrqc = (num_pools == RTE_ETH_16_POOLS) ?
IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 16 or 32 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*
* MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
if (hw->mac.type != ixgbe_mac_82598EB)
/*PF VF Transmit Enable*/
IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
- vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
if (hw->mac.type != ixgbe_mac_82598EB) {
config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_configure(dev);
}
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
- if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- }
}
if (config_dcb_tx) {
/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = ixgbe_dcb_pfc_enabled;
}
ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 64 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
break;
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQEN);
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT4TCEN);
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT8TCEN);
break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
ixgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
ixgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
ixgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
ixgbe_vmdq_tx_hw_configure(hw);
else {
mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
IXGBE_MTQC_8TC_8TQ;
break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
rxq->rx_using_sse = rx_using_sse;
#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
#endif
}
}
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
/*
* According to chapter of 4.6.7.2.1 of the Spec Rev.
* 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RFCTL configuration */
rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
- if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~IXGBE_RFCTL_RSC_DIS;
else
rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
else
hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
rxcsum |= IXGBE_RXCSUM_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= IXGBE_RXCSUM_IPPCSE;
else
rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540) {
rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
else
rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY) ||
+ RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY)) {
+ RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = ixgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -5683,7 +5682,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5732,7 +5731,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
@@ -5740,8 +5739,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
uint8_t rx_udp_csum_zero_err;
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
/* no fdir support */
if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
/**< Maximum number of MAC addresses. */
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/**< Device RX offload capabilities. */
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**< Device TX offload capabilities. */
dev_info->speed_capa =
representor->pf_ethdev->data->dev_link.link_speed;
- /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
dev_info->switch_info.name =
representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
*/
if (hw->mac.type == ixgbe_mac_82598EB)
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_16_POOLS;
+ RTE_ETH_16_POOLS;
else
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_64_POOLS;
+ RTE_ETH_64_POOLS;
for (q = 0; q < queues_per_pool; q++)
(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
* @param rx_mask
* The RX mode mask, which is one or more of accepting Untagged Packets,
* packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-* ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-* ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+* RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+* RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
* in rx_mode.
* @param on
* 1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static int is_kni_initialized;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
break;
/* CN23xx 25G cards */
case PCI_SUBSYS_DEV_ID_CN2350_225:
case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = ETH_LINK_SPEED_25G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
break;
default:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
lio_dev_err(lio_dev,
"Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->max_mac_addrs = 1;
- devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+ devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
+ devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
devinfo->rx_desc_lim = lio_rx_desc_lim;
devinfo->tx_desc_lim = lio_tx_desc_lim;
devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_EX |
- ETH_RSS_IPV6_TCP_EX);
+ devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_IPV6_TCP_EX);
return 0;
}
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
- for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
rss_state->itable[index] = reta_conf[i].reta[j];
}
}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
if (rss_state->ip)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (rss_state->tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (rss_state->ipv6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (rss_state->ipv6_tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (rss_state->ipv6_ex)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
rss_conf->rss_hf = rss_hf;
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_state->hash_disable)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
hashinfo |= LIO_RSS_HASH_IPV4;
rss_state->ip = 1;
} else {
rss_state->ip = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV4;
rss_state->tcp_hash = 1;
} else {
rss_state->tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
hashinfo |= LIO_RSS_HASH_IPV6;
rss_state->ipv6 = 1;
} else {
rss_state->ipv6 = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6;
rss_state->ipv6_tcp_hash = 1;
} else {
rss_state->ipv6_tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
hashinfo |= LIO_RSS_HASH_IPV6_EX;
rss_state->ipv6_ex = 1;
} else {
rss_state->ipv6_ex = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
rss_state->ipv6_tcp_ex_hash = 1;
} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
/* Initialize */
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
/* Return what we found */
if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
return rte_eth_linkstatus_set(eth_dev, &link);
}
- link.link_status = ETH_LINK_UP; /* Interface is up */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (lio_dev->linfo.link.s.speed) {
case LIO_LINK_SPEED_10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case LIO_LINK_SPEED_25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
}
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[conf_idx].reta[reta_idx] = q_idx;
reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rss_conf rss_conf;
switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
lio_dev_rss_configure(eth_dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode. */
default:
memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
lio_dev_err(lio_dev, "Unable to set Link Down\n");
return -1;
}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Inform firmware about change in number of queues to use.
* Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
int i;
int ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
#define MEMIF_MP_SEND_REGION "memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
}
MIF_LOG(INFO, "Connected.");
return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
proc_private = dev->process_private;
- if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
proc_private->regions_num == 0) {
memif_mp_request_regions(dev);
- } else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+ } else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
proc_private->regions_num > 0) {
memif_free_regions(dev);
}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = priv->if_index;
info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_56G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_56G;
info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
dev_link.link_speed = link_speed;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
dev->data->dev_link = dev_link;
return 0;
}
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
ret = 0;
out:
MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
};
static const uint64_t dpdk[] = {
[INNER] = 0,
- [IPV4] = ETH_RSS_IPV4,
- [IPV4_1] = ETH_RSS_FRAG_IPV4,
- [IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
- [IPV6] = ETH_RSS_IPV6,
- [IPV6_1] = ETH_RSS_FRAG_IPV6,
- [IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
- [IPV6_3] = ETH_RSS_IPV6_EX,
+ [IPV4] = RTE_ETH_RSS_IPV4,
+ [IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+ [IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IPV6] = RTE_ETH_RSS_IPV6,
+ [IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+ [IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IPV6_3] = RTE_ETH_RSS_IPV6_EX,
[TCP] = 0,
[UDP] = 0,
- [IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
- [IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
- [IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
- [IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
- [IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
- [IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+ [IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ [IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ [IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+ [IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+ [IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ [IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
};
static const uint64_t verbs[RTE_DIM(dpdk)] = {
[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
* - MAC flow rules are generated from @p dev->data->mac_addrs
* (@p priv->mac array).
* - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
* is enabled and VLAN filters are configured.
*
* @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
struct rte_ether_addr *rule_mac = &eth_spec.dst;
rte_be16_t *rule_vlan =
(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!ETH_DEV(priv)->data->promiscuous ?
&vlan_spec.tci :
NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
static void
mlx4_link_status_alarm(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
};
uint32_t caught[RTE_DIM(type)] = { 0 };
struct ibv_async_event event;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
unsigned int i;
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
int
mlx4_intr_install(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
int rc;
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
int
mlx4_rxq_intr_enable(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
uint64_t
mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_RSS_HASH;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
- offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return offloads;
}
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
uint64_t
mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
(void)priv;
return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
/* By default, FCS (CRC) is stripped by hardware. */
crc_present = 0;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (priv->hw_fcs_strip) {
crc_present = 1;
} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts = elts,
/* Toggle Rx checksum offload if hardware supports it. */
.csum = priv->hw_csum &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.csum_l2tun = priv->hw_csum_l2tun &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.crc_present = crc_present,
.l2tun_offload = priv->hw_csum_l2tun,
.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
uint64_t
mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+ uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (priv->hw_csum) {
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
}
if (priv->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (priv->hw_csum_l2tun) {
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (priv->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
}
return offloads;
}
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts_comp_cd_init =
RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
.csum = priv->hw_csum &&
- (offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM)),
+ (offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
.csum_l2tun = priv->hw_csum_l2tun &&
(offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
/* Enable Tx loopback for VF devices. */
.lb = !!priv->vf,
.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
dev_link.link_speed = link_speed;
priv->link_speed_capa = 0;
if (edata.supported & (SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (edata.supported & SUPPORTED_10000baseKR_Full)
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (edata.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
return ret;
}
dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
- ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+ RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
sc = ecmd->link_mode_masks[0] |
((uint64_t)ecmd->link_mode_masks[1] << 32);
priv->link_speed_capa = 0;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
sc = ecmd->link_mode_masks[2] |
((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
MLX5_BITSHIFT
(ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index a823d26bebf9..d207ec053e07 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1350,8 +1350,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1634,7 +1634,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/*
* If HW has bug working with tunnel packet decapsulation and
* scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
- * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+ * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
*/
if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e28cc461b914..7727dfb4196c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1488,10 +1488,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_udp_tunnel *udp_tunnel)
{
MLX5_ASSERT(udp_tunnel != NULL);
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
udp_tunnel->udp_port == 4789)
return 0;
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
udp_tunnel->udp_port == 4790)
return 0;
return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a15f86616d49..ea17a86f4955 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1217,7 +1217,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
struct mlx5_flow_rss_desc {
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint64_t hash_fields; /* Verbs Hash fields. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
#define MLX5_VPMD_DESCS_PER_LOOP 4
/* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
/* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
MLX5_RSS_SRC_DST_ONLY))
/* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
}
if ((dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+ RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->default_txportconf.ring_size = 256;
info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
- if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
- (priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+ if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+ (priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
info->default_rxportconf.nb_queues = 16;
info->default_txportconf.nb_queues = 16;
if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4d0b7b5ef32..4309852523b2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
uint64_t rss_types;
/**<
* RSS types bit-field associated with this node
- * (see ETH_RSS_* definitions).
+ * (see RTE_ETH_RSS_* definitions).
*/
uint64_t node_flags;
/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
* @param[in] pattern
* User flow pattern.
* @param[in] types
- * RSS types to expand (see ETH_RSS_* definitions).
+ * RSS types to expand (see RTE_ETH_RSS_* definitions).
* @param[in] graph
* Input graph to expand @p pattern according to @p types.
* @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_OUTER_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_GRE,
MLX5_EXPANSION_NVGRE),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_VXLAN] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
MLX5_EXPANSION_IPV4_TCP),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_IPV4_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
MLX5_EXPANSION_IPV6_TCP,
MLX5_EXPANSION_IPV6_FRAG_EXT),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_IPV6_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1095,7 +1095,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
* @param[in] tunnel
* 1 when the hash field is for a tunnel item.
* @param[in] layer_types
- * ETH_RSS_* types.
+ * RTE_ETH_RSS_* types.
* @param[in] hash_fields
* Item hash fields.
*
@@ -1648,14 +1648,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
&rss->types,
"some RSS protocols are not"
" supported");
- if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
- !(rss->types & ETH_RSS_IP))
+ if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+ !(rss->types & RTE_ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L3 partial RSS requested but L3 RSS"
" type not specified");
- if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
- !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+ if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+ !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L4 partial RSS requested but L4 RSS"
@@ -6411,8 +6411,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
* mlx5_flow_hashfields_adjust() in advance.
*/
rss_desc->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
}
flow->dev_handles = 0;
if (rss && rss->types) {
@@ -7036,7 +7036,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
if (!priv->reta_idx_n || !priv->rxqs_n) {
return 0;
}
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
queue[i] = (*priv->reta_idx)[i];
@@ -8704,7 +8704,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
NULL, "invalid port configuration");
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
ctx->action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5c68d4f7d742..ff85c1c013a5 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
/* Valid layer type for IPV4 RSS. */
#define MLX5_IPV4_LAYER_TYPES \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
/* IBV hash source bits for IPV4. */
#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
/* Valid layer type for IPV6 RSS. */
#define MLX5_IPV6_LAYER_TYPES \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
/* IBV hash source bits for IPV6. */
#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e31d4d846825..759fe57f19d6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10837,9 +10837,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
else
dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10847,9 +10847,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
else
dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10863,11 +10863,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
return;
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
- if (rss_types & ETH_RSS_UDP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_UDP;
else
@@ -10875,11 +10875,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
}
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
- if (rss_types & ETH_RSS_TCP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_TCP;
else
@@ -14418,9 +14418,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4:
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV4;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV4;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV4;
else
*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14429,9 +14429,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV6:
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV6;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV6;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV6;
else
*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14440,11 +14440,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_UDP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_UDP:
- if (rss_types & ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_UDP) {
*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
else
*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14453,11 +14453,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_TCP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_TCP:
- if (rss_types & ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_TCP) {
*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
else
*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14605,8 +14605,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
origin = &shared_rss->origin;
origin->func = rss->func;
origin->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1627c3905fa4..8a455cbf22f4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1816,7 +1816,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_TCP,
+ (rss_desc, tunnel, RTE_ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
IBV_RX_HASH_DST_PORT_TCP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1829,7 +1829,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_UDP,
+ (rss_desc, tunnel, RTE_ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
IBV_RX_HASH_DST_PORT_UDP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
if (!(*priv->rxqs)[i])
continue;
(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
- !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+ !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
++idx;
}
return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
}
/* Fill each entry of the table even if its bit is not set. */
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
(*priv->reta_idx)[i];
}
return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
return ret;
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
if (((reta_conf[idx].mask >> i) & 0x1) == 0)
continue;
MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d8d7e481dea0..eb4dc3375248 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *config = &priv->config;
- uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
if (config->hw_fcs_strip)
- offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (config->hw_csum)
- offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
uint64_t
mlx5_get_rx_port_offloads(void)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
return offloads;
}
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->dev_conf.rxmode.offloads;
/* The offloads should be checked on rte_eth_dev layer. */
- MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
DRV_LOG(ERR, "port %u queue index %u split "
"offload not configured",
@@ -1325,7 +1325,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
- unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+ unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1428,7 +1428,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
MLX5_ASSERT(tmpl->rxq.rxseg_n &&
tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
- if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
@@ -1472,7 +1472,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
config->mprq.stride_size_n : mprq_stride_size;
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_scatter_en =
- !!(offloads & DEV_RX_OFFLOAD_SCATTER);
+ !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1487,7 +1487,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pktlen;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
if (lro_on_queue && first_mb_free_size <
@@ -1548,9 +1548,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
- tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+ tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* Configure Rx timestamp. */
- tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+ tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
tmpl->rxq.timestamp_rx_flag = 0;
if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
&tmpl->rxq.timestamp_offset,
@@ -1559,11 +1559,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
goto error;
}
/* Configure VLAN stripping. */
- tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
tmpl->rxq.lro = lro_on_queue;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
@@ -1593,7 +1593,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.crc_present << 2);
/* Save port ID. */
tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
- (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+ (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
tmpl->rxq.port_id = dev->data->port_id;
tmpl->priv = priv;
tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
/* HW checksum offload capabilities of vectorized Tx. */
#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
/*
* Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
unsigned int diff = 0, olx = 0, i, m;
MLX5_ASSERT(priv);
- if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
/* We should support Multi-Segment Packets. */
olx |= MLX5_TXOFF_CONFIG_MULTI;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
/* We should support TCP Send Offload. */
olx |= MLX5_TXOFF_CONFIG_TSO;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support Software Parser for Tunnels. */
olx |= MLX5_TXOFF_CONFIG_SWP;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support IP/TCP/UDP Checksums. */
olx |= MLX5_TXOFF_CONFIG_CSUM;
}
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
/* We should support VLAN insertion. */
olx |= MLX5_TXOFF_CONFIG_VLAN;
}
- if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
rte_mbuf_dynflag_lookup
(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
struct mlx5_dev_config *config = &priv->config;
if (config->hw_csum)
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if (config->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (config->tx_pp)
- offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+ offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
if (config->swp) {
if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->swp & MLX5_SW_PARSING_TSO_CAP)
- offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
}
if (config->tunnel_en) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso) {
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
- offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GRE_CAP)
- offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
- offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
}
}
if (!config->mprq.enabled)
- offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
return offloads;
}
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
unsigned int inlen_mode; /* Minimal required Inline data. */
unsigned int txqs_inline; /* Min Tx queues to enable inline. */
uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
- bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
bool vlan_inline;
unsigned int temp;
txq_ctrl->txq.fast_free =
- !!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
- !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+ !!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
!config->mprq.enabled);
if (config->txqs_inline == MLX5_ARG_UNSET)
txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
* tx_burst routine.
*/
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
- vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+ vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
/*
* If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
MLX5_MAX_TSO_HEADER);
txq_ctrl->txq.tso_en = 1;
}
- if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+ if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
- ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
- ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
(config->swp & MLX5_SW_PARSING_TSO_CAP))
txq_ctrl->txq.tunnel_en = 1;
- txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+ txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_TSO_CAP)) |
- ((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+ ((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_CSUM_CAP));
}
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (!priv->config.hw_vlan_strip) {
DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 8937ec0d3037..7f7b545ca63a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
struct mvneta_priv *priv = dev->data->dev_private;
struct neta_ppio_params *ppio_params;
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *info)
{
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G;
info->max_rx_queues = MRVL_NETA_RXQ_MAX;
info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
neta_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 6428f9ff7931..64aadcffd85a 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
#define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
/** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
if (rss_conf->rss_hf == 0) {
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
- } else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_2_TUPLE;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 1;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
- dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+ dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return ret;
if (dev->data->nb_rx_queues == 1 &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
ret = pp2_ppio_disable(priv->ppio);
if (ret)
return ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
if (dev->data->all_multicast == 1)
mrvl_allmulticast_enable(dev);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = mrvl_populate_vlan_table(dev, 1);
if (ret) {
MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
priv->flow_ctrl = 0;
}
- if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
ret = mrvl_dev_set_link_up(dev);
if (ret) {
MRVL_LOG(ERR, "Failed to set link up");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case SPEED_10000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
pp2_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
info->max_rx_queues = MRVL_PP2_RXQ_MAX;
info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
info->tx_offload_capa = MRVL_TX_OFFLOADS;
info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
- info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP;
+ info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP;
/* By default packets are dropped if no descriptors are available */
info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
int ret;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
MRVL_LOG(ERR, "VLAN stripping is not supported\n");
return -ENOTSUP;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = mrvl_populate_vlan_table(dev, 1);
else
ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return ret;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
MRVL_LOG(ERR, "Extend VLAN not supported\n");
return -ENOTSUP;
}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
- rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+ rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return ret;
}
- fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+ fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
if (en) {
- if (fc_conf->mode == RTE_FC_NONE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
}
return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
rx_en = 1;
tx_en = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
rx_en = 0;
tx_en = 1;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
rx_en = 1;
tx_en = 0;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
rx_en = 0;
tx_en = 0;
break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hash_type == PP2_PPIO_HASH_T_NONE)
rss_conf->rss_hf = 0;
else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
- rss_conf->rss_hf = ETH_RSS_IPV4;
+ rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
return 0;
}
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
eth_dev->dev_ops = &mrvl_ops;
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_eth_dev_probing_finish(eth_dev);
return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
#include "hn_nvs.h"
#include "ndis.h"
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NETVSC_ARG_LATENCY "latency"
#define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
hn_rndis_get_linkspeed(hv);
link = (struct rte_eth_link) {
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_autoneg = ETH_LINK_SPEED_FIXED,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
.link_speed = hv->link_speed / 10000,
};
if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
if (old.link_status == link.link_status)
return 0;
PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
- (link.link_status == ETH_LINK_UP) ? "up" : "down");
+ (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
return rte_eth_linkstatus_set(dev, &link);
}
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
struct hn_data *hv = dev->data->dev_private;
int rc;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = HN_MAX_XFER_LEN;
dev_info->max_mac_addrs = 1;
dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
dev_info->flow_type_rss_offloads = hv->rss_offloads;
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->max_rx_queues = hv->max_queues;
dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
/* Convert from DPDK RSS hash flags to NDIS hash flags */
hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
hv->rss_hash |= NDIS_HASH_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
hv->rss_hash |= NDIS_HASH_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
hv->rss_hash |= NDIS_HASH_IPV6_EX;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
if (hv->rss_hash & NDIS_HASH_IPV4)
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (hv->rss_hash & NDIS_HASH_IPV6)
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
if (hv->rss_hash & NDIS_HASH_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
return 0;
}
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = hn_rndis_conf_offload(hv, txmode->offloads,
rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
hv->rss_offloads = 0;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
- hv->rss_offloads |= ETH_RSS_IPV4
- | ETH_RSS_NONFRAG_IPV4_TCP
- | ETH_RSS_NONFRAG_IPV4_UDP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV4
+ | RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
- hv->rss_offloads |= ETH_RSS_IPV6
- | ETH_RSS_NONFRAG_IPV6_TCP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6
+ | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
- hv->rss_offloads |= ETH_RSS_IPV6_EX
- | ETH_RSS_IPV6_TCP_EX;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+ | RTE_ETH_RSS_IPV6_TCP_EX;
/* Commit! */
*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
== NDIS_RXCSUM_CAP_TCP4)
params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
== NDIS_TXCSUM_CAP_IP4)
params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
else
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
else
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
return error;
}
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
== HN_NDIS_TXCSUM_CAP_IP4)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
== HN_NDIS_TXCSUM_CAP_TCP4 &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
== HN_NDIS_TXCSUM_CAP_TCP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
(hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
== HN_NDIS_LSOV2_CAP_IP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
return 0;
}
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = dev->data->nb_rx_queues;
dev_info->max_tx_queues = dev->data->nb_tx_queues;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
status.speed = MAC_SPEED_UNKNOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
if (internals->rxmac[0] != NULL) {
nc_rxmac_read_status(internals->rxmac[0], &status);
switch (status.speed) {
case MAC_SPEED_10G:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case MAC_SPEED_40G:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case MAC_SPEED_100G:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
nc_rxmac_read_status(internals->rxmac[i], &status);
if (status.enabled && status.link_up) {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
break;
}
}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
}
/* Timestamps are enabled when there is
* key-value pair: enable_timestamp=1
- * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+ * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
*/
if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Checking TX mode */
if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
}
/* Checking RX mode */
- if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
!(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
PMD_INIT_LOG(INFO, "RSS not supported");
return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
hw->mtu = dev->data->mtu;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_L2MC;
/* TX checksum offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
/* LSO offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hw->cap & NFP_NET_CFG_CTRL_LSO)
ctrl |= NFP_NET_CFG_CTRL_LSO;
else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
/* RX gather */
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
ctrl |= NFP_NET_CFG_CTRL_GATHER;
return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
int ret;
static const uint32_t ls_to_ethtool[] = {
- [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_1G] = ETH_SPEED_NUM_1G,
- [NFP_NET_CFG_STS_LINK_RATE_10G] = ETH_SPEED_NUM_10G,
- [NFP_NET_CFG_STS_LINK_RATE_25G] = ETH_SPEED_NUM_25G,
- [NFP_NET_CFG_STS_LINK_RATE_40G] = ETH_SPEED_NUM_40G,
- [NFP_NET_CFG_STS_LINK_RATE_50G] = ETH_SPEED_NUM_50G,
- [NFP_NET_CFG_STS_LINK_RATE_100G] = ETH_SPEED_NUM_100G,
+ [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_1G] = RTE_ETH_SPEED_NUM_1G,
+ [NFP_NET_CFG_STS_LINK_RATE_10G] = RTE_ETH_SPEED_NUM_10G,
+ [NFP_NET_CFG_STS_LINK_RATE_25G] = RTE_ETH_SPEED_NUM_25G,
+ [NFP_NET_CFG_STS_LINK_RATE_40G] = RTE_ETH_SPEED_NUM_40G,
+ [NFP_NET_CFG_STS_LINK_RATE_50G] = RTE_ETH_SPEED_NUM_50G,
+ [NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
memset(&link, 0, sizeof(struct rte_eth_link));
if (nn_link_status & NFP_NET_CFG_STS_LINK)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
NFP_NET_CFG_STS_LINK_RATE_MASK;
if (nn_link_status >= RTE_DIM(ls_to_ethtool))
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
link.link_speed = ls_to_ethtool[nn_link_status];
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = 1;
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
};
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_UDP;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP;
dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
}
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
if (link.link_status)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
new_ctrl = 0;
/* Enable vlan strip if it is not configured yet */
- if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
!(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
/* Disable vlan strip just if it is configured */
- if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
rss_hf = rss_conf->rss_hf;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
/* Propagate current RSS hash functions to caller */
rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = link_up;
link_speeds = &dev->data->dev_conf.link_speeds;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
negotiate = true;
err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
allowed_speeds = 0;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
- allowed_speeds |= ETH_LINK_SPEED_1G;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_100M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_10M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
if (*link_speeds & ~allowed_speeds) {
PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = hw->mac.default_speeds;
} else {
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= NGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= NGBE_LINK_SPEED_100M_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= NGBE_LINK_SPEED_10M_FULL;
}
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_10M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ~ETH_LINK_SPEED_AUTONEG);
+ ~RTE_ETH_LINK_SPEED_AUTONEG);
hw->mac.get_link_status = true;
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case NGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
case NGBE_LINK_SPEED_10M_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
lan_speed = 0;
break;
case NGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
lan_speed = 1;
break;
case NGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
lan_speed = 2;
break;
}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
- if (link.link_status == ETH_LINK_UP) {
+ if (link.link_status == RTE_ETH_LINK_UP) {
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
ngbe_dev_link_update(dev, 0);
/* likely to up */
- if (link.link_status != ETH_LINK_UP)
+ if (link.link_status != RTE_ETH_LINK_UP)
/* handle it 1 sec later, wait it being stable */
timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
rte_spinlock_t rss_lock;
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[40]; /**< 40-byte hash key. */
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
if (dev == NULL)
return -EINVAL;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
if (dev == NULL)
return 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
internal->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->port_id = eth_dev->data->port_id;
rte_eth_random_addr(internals->eth_addr.addr_bytes);
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
- internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
+ internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
rte_memcpy(internals->rss_key, default_rss_key, 40);
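The RETA loops renamed above all follow the same indexing scheme: the redirection table of `reta_size` entries is split into groups of `RTE_ETH_RETA_GROUP_SIZE` (64) entries, each guarded by a 64-bit per-group mask. A minimal sketch of that indexing, with a local stand-in constant rather than the real rte_ethdev definition:

```c
#include <stdint.h>

/* Stand-in for RTE_ETH_RETA_GROUP_SIZE; the real value is 64 in rte_ethdev. */
#define RETA_GROUP_SIZE 64

/* Which rte_eth_rss_reta_entry64 a flat RETA index falls into. */
static inline int reta_group(uint16_t entry)
{
	return entry / RETA_GROUP_SIZE;
}

/* Which bit of that group's mask (and slot of its reta[] array) it uses. */
static inline int reta_bit(uint16_t entry)
{
	return entry % RETA_GROUP_SIZE;
}

/* True when the caller asked to update this entry, mirroring the
 * "(reta_conf[i].mask >> j) & 0x01" test in the loops above. */
static inline int reta_entry_selected(uint64_t mask, int bit)
{
	return (mask >> bit) & 0x01;
}
```

This is why `eth_rss_reta_update()` iterates `reta_size / RTE_ETH_RETA_GROUP_SIZE` groups and, within each, 64 mask bits.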
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
(eth_dev->data->port_id),
link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
switch (nic->speed) {
case OCTEONTX_LINK_SPEED_SGMII:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case OCTEONTX_LINK_SPEED_XAUI:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_RXAUI:
case OCTEONTX_LINK_SPEED_10G_R:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_QSGMII:
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case OCTEONTX_LINK_SPEED_40G_R:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case OCTEONTX_LINK_SPEED_RESERVE1:
case OCTEONTX_LINK_SPEED_RESERVE2:
default:
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
octeontx_log_err("incorrect link speed %d", nic->speed);
break;
}
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= OCCTX_TX_MULTI_SEG_F;
return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
flags |= OCCTX_RX_MULTI_SEG_F;
eth_dev->data->scattered_rx = 1;
/* If scatter mode is enabled, TX should also be in multi
* seg mode, else memory leak will occur
*/
- nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
- if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
- txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+ txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
octeontx_log_err("setting link speed/duplex not supported");
return -EINVAL;
}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
octeontx_log_err("Scatter mode is disabled");
return -EINVAL;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
/* Setup scatter mode if needed by jumbo */
if (data->mtu > buffsz) {
- nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
struct octeontx_nic *nic = octeontx_pmd_priv(dev);
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_40G;
/* Min/Max MTU supported */
dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
nic->ev_ports = 1;
nic->print_flag = -1;
- data->dev_link.link_status = ETH_LINK_DOWN;
+ data->dev_link.link_status = RTE_ETH_LINK_DOWN;
data->dev_started = 0;
data->promiscuous = 0;
data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,23 @@
#define OCCTX_MAX_MTU (OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
#define OCTEONTX_RX_OFFLOADS ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
static inline struct octeontx_nic *
octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = octeontx_vlan_hw_filter(nic, true);
if (rc)
goto done;
- nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
} else {
rc = octeontx_vlan_hw_filter(nic, false);
if (rc)
goto done;
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
}
}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
TAILQ_INIT(&nic->vlan_info.fltr_tbl);
- rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
if (rc)
octeontx_log_err("Failed to set vlan offload rc=%d", rc);
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
return rc;
if (conf.rx_pause && conf.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (conf.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (conf.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
/* low_water & high_water values are in Bytes */
fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
conf.high_water = fc_conf->high_water;
conf.low_water = fc_conf->low_water;
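The flow-control renames in `octeontx_dev_flow_ctrl_get()`/`_set()` above encode a simple two-bit mapping between the hardware's rx/tx pause flags and the `RTE_ETH_FC_*` mode enum. A sketch of both directions, using local stand-in enumerators rather than the real rte_ethdev values:

```c
/* Local stand-ins for RTE_ETH_FC_NONE/RX_PAUSE/TX_PAUSE/FULL. */
enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

/* Hardware pause flags -> mode, as in the _get() path above. */
static enum fc_mode fc_mode_from_pause(int rx_pause, int tx_pause)
{
	if (rx_pause && tx_pause)
		return FC_FULL;
	else if (rx_pause)
		return FC_RX_PAUSE;
	else if (tx_pause)
		return FC_TX_PAUSE;
	else
		return FC_NONE;
}

/* Mode -> hardware pause flags, as in the _set() path above. */
static void fc_pause_from_mode(enum fc_mode mode, int *rx_pause, int *tx_pause)
{
	*rx_pause = (mode == FC_FULL) || (mode == FC_RX_PAUSE);
	*tx_pause = (mode == FC_FULL) || (mode == FC_TX_PAUSE);
}
```

The two functions are inverses of each other, which is what lets drivers round-trip the configuration through the mailbox response.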
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9c5d748e8575..72da8856bd86 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
if (otx2_dev_is_vf(dev) ||
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
/* TSO not supported for earlier chip revisions */
if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
return capa;
}
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
req->npa_func = otx2_npa_pf_func_get();
req->sso_func = otx2_sso_pf_func_get();
req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->rq.sso_ena = 0;
- if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
aq->rq.ipsech_ena = 1;
aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev)) {
rc = otx2_nix_raw_clock_tsc_conv(dev);
if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
- conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Setting up the rx[tx]_offload_flags due to change
* in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
otx2_err("Outer IP and SCTP checksum unsupported");
goto fail_configure;
}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled in PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_enable(eth_dev);
else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
rc = otx2_eth_sec_ctx_create(eth_dev);
if (rc)
goto free_mac_addrs;
- dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
/* Initialize rte-flow */
rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
#define CQ_TIMER_THRESH_MAX 255
-#define NIX_RSS_L3_L4_SRC_DST (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
- | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+ | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
- ETH_RSS_TCP | ETH_RSS_SCTP | \
- ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
- ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+ NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_C_VLAN)
#define NIX_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define NIX_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_QINQ_STRIP | \
- DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
- val = ETH_RSS_RETA_SIZE_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
- val = ETH_RSS_RETA_SIZE_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
- val = ETH_RSS_RETA_SIZE_256;
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+ val = RTE_ETH_RSS_RETA_SIZE_64;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+ val = RTE_ETH_RSS_RETA_SIZE_128;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+ val = RTE_ETH_RSS_RETA_SIZE_256;
else
val = NIX_RSS_RETA_SIZE;
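The devargs parser above rounds a user-supplied RETA size up to the next supported `RTE_ETH_RSS_RETA_SIZE_*` bucket, falling back to the driver default otherwise. A sketch of that rounding, with the size constants written out numerically (64/128/256) and a hypothetical `DRIVER_DEFAULT` standing in for `NIX_RSS_RETA_SIZE`:

```c
/* Hypothetical stand-in for NIX_RSS_RETA_SIZE in the real driver. */
#define DRIVER_DEFAULT 64

/* Round a requested RETA size up to a supported bucket, mirroring
 * parse_reta_size() above: <=64 -> 64, <=128 -> 128, <=256 -> 256,
 * anything larger falls back to the driver default. */
static int round_reta_size(int val)
{
	if (val <= 64)
		return 64;
	else if (val <= 128)
		return 128;
	else if (val <= 256)
		return 256;
	return DRIVER_DEFAULT;
}
```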
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
return -EINVAL;
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * NIX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
};
/* Auto negotiation disabled */
- devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
/* 50G and 100G to be supported for board version C0
* and above.
*/
if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
!RTE_IS_POWER_OF_2(sa_width));
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return 0;
if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
uint16_t port = eth_dev->data->port_id;
char name[RTE_MEMZONE_NAMESIZE];
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
goto err_exit;
}
- if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rc = flow_update_sec_tt(dev, actions);
if (rc != 0) {
rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
int rc;
if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
goto done;
if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rsp->rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (rsp->tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
done:
return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (otx2_dev_is_Ax(dev) &&
(dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_conf.mode =
- (fc_conf.mode == RTE_FC_FULL ||
- fc_conf.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_conf.mode == RTE_ETH_FC_FULL ||
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
return 0;
memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
attr, "No support of RSS in egress");
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
*FLOW_KEY_ALG index. So, till we update the action with
*flow_key_alg index, set the action to drop.
*/
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
flow->npc_action = NIX_RX_ACTIONOP_DROP;
else
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
otx2_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
eth_link.link_status = link->link_up;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
static int
lbk_link_update(struct rte_eth_link *link)
{
- link->link_status = ETH_LINK_UP;
- link->link_speed = ETH_SPEED_NUM_100G;
- link->link_autoneg = ETH_LINK_FIXED;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = RTE_ETH_LINK_UP;
+ link->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
link->link_status = rsp->link_info.link_up;
link->link_speed = rsp->link_info.speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (rsp->link_info.full_duplex)
link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
/* 50G and 100G to be supported for board version C0 and above */
if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
link_speed = 100000;
- if (link_speeds & ETH_LINK_SPEED_50G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
link_speed = 50000;
}
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed = 40000;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed = 25000;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed = 20000;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed = 10000;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
link_speed = 5000;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed = 1000;
return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
static inline uint8_t
nix_parse_eth_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
return cgx_change_mode(dev, &cfg);
}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
/* System time should be already on by default */
nix_start_timecounters(eth_dev);
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
return -EINVAL;
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
rss->ind_tbl[idx] = reta_conf[i].reta[j];
idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = rss->ind_tbl[j];
}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
}
#define RSS_IPV4_ENABLE ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE ( \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE ( \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
dev->rss_info.nix_rss = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
otx2_nix_rss_set_key(dev, rss_conf->rss_key,
(uint32_t)rss_conf->rss_key_len);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
int rc;
/* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
return 0;
/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
}
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
/* For PTP enabled, scalar rx function should be chosen as most of the
* PTP apps are implemented to rx burst 1 pkt.
*/
- if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
pick_rx_func(eth_dev, nix_eth_rx_burst);
else
pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
else
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
* Take offset from LA since in case of untagged packet,
* lbptr is zero.
*/
- if (type == ETH_VLAN_TYPE_OUTER) {
+ if (type == RTE_ETH_VLAN_TYPE_OUTER) {
vtag_action.act.vtag0_def = vtag_index;
vtag_action.act.vtag0_lid = NPC_LID_LA;
vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
if (vlan->strip_on ||
(vlan->qinq_on && !vlan->qinq_before_def)) {
if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- ETH_MQ_RX_RSS)
+ RTE_ETH_MQ_RX_RSS)
vlan->def_rx_mcam_ent.action |=
NIX_RX_ACTIONOP_RSS;
else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
rxmode = &eth_dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, true);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, false);
}
if (rc)
goto done;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, true, 0);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, false, 0);
}
if (rc)
goto done;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
if (!dev->vlan_info.qinq_on) {
- offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, true);
if (rc)
goto done;
}
} else {
if (dev->vlan_info.qinq_on) {
- offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, false);
if (rc)
goto done;
}
}
- if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
tpid_cfg->tpid = tpid;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
else
tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
if (rc)
return rc;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
dev->vlan_info.outer_vlan_tpid = tpid;
else
dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
vlan->outer_vlan_idx = 0;
}
- rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
vtag_index, on);
if (rc < 0) {
printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
} else {
/* Reinstall all mcam entries now if filter offload is set */
if (eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
rc = otx2_nix_vlan_offload_set(eth_dev, mask);
if (rc) {
otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
otx_epvf = OTX_EP_DEV(eth_dev);
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
devinfo->max_rx_queues = otx_epvf->max_rx_queues;
devinfo->max_tx_queues = otx_epvf->max_tx_queues;
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l4_len = hdr_lens.l4_len;
if (droq_pkt->nb_segs > 1 &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
static struct pfe *g_pfe;
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
/* TODO: make pfe_svr a runtime option.
* Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
}
link.link_status = lstatus;
- link.link_speed = ETH_LINK_SPEED_1G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_speed = RTE_ETH_LINK_SPEED_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
pfe_eth_atomic_write_link_status(dev, &link);
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t; /* In DWORDS !!! */
struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define ETH_SPEED_AUTONEG 0
-#define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG 0
+#define RTE_ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
u32 pause; /* bitmask */
#define ETH_PAUSE_NONE 0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
}
use_tx_offload = !!(tx_offloads &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
- DEV_TX_OFFLOAD_TCP_TSO | /* tso */
- DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
if (use_tx_offload) {
DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN filtering kicks in when a VLAN is added */
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
qede_vlan_filter_set(eth_dev, 0, 1);
} else {
if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
* enabled
*/
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
qede_vlan_filter_set(eth_dev, 0, 0);
}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
/* Configure default RETA */
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
- id = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ id = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
q = i % QEDE_RSS_COUNT(eth_dev);
reta_conf[id].reta[pos] = q;
}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure TPA parameters */
- if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (qede_enable_tpa(eth_dev, true))
return -EINVAL;
/* Enable scatter mode for LRO */
if (!eth_dev->data->scattered_rx)
- rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}
/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
if (qede_config_rss(eth_dev))
goto err;
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* We need to have min 1 RX queue.There is no min check in
* rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_NOTICE(edev, false,
"Invalid devargs supplied, requested change will not take effect\n");
- if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
- rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+ if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
DP_ERR(edev, "Unsupported multi-queue mode\n");
return -ENOTSUP;
}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->mtu = eth_dev->data->mtu;
/* Enable VLAN offloads by default */
- ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
- dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
dev_info->rx_queue_offload_capa = 0;
/* TX offloads are on a per-packet basis, so it is applicable
* to both at port and queue levels.
*/
- dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
dev_info->default_txconf = (struct rte_eth_txconf) {
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
};
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
- speed_cap |= ETH_LINK_SPEED_1G;
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
- speed_cap |= ETH_LINK_SPEED_10G;
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
- speed_cap |= ETH_LINK_SPEED_25G;
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
- speed_cap |= ETH_LINK_SPEED_40G;
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
- speed_cap |= ETH_LINK_SPEED_50G;
+ speed_cap |= RTE_ETH_LINK_SPEED_50G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
- speed_cap |= ETH_LINK_SPEED_100G;
+ speed_cap |= RTE_ETH_LINK_SPEED_100G;
dev_info->speed_capa = speed_cap;
return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
/* Link Mode */
switch (q_link.duplex) {
case QEDE_DUPLEX_HALF:
- link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case QEDE_DUPLEX_FULL:
- link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case QEDE_DUPLEX_UNKNOWN:
default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
link.link_duplex = link_duplex;
/* Link Status */
- link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
/* AN */
link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
/* Pause is assumed to be supported (SUPPORTED_Pause) */
- if (fc_conf->mode == RTE_FC_FULL)
+ if (fc_conf->mode == RTE_ETH_FC_FULL)
params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
QED_LINK_PAUSE_RX_ENABLE);
- if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
- if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
QED_LINK_PAUSE_TX_ENABLE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
{
*rss_caps = 0;
- *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
}
int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
uint8_t entry;
int rc = 0;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported by hardware\n",
reta_size);
return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
for_each_hwfn(edev, i) {
for (j = 0; j < reta_size; j++) {
- idx = j / RTE_RETA_GROUP_SIZE;
- shift = j % RTE_RETA_GROUP_SIZE;
+ idx = j / RTE_ETH_RETA_GROUP_SIZE;
+ shift = j % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = reta_conf[idx].reta[shift];
fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
uint16_t i, idx, shift;
uint8_t entry;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported\n",
reta_size);
return -EINVAL;
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = qdev->rss_ind_table[i];
reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
adapter->ipgre.num_filters = 0;
if (is_vf) {
adapter->vxlan.enable = true;
- adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
adapter->geneve.enable = true;
- adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
adapter->ipgre.enable = true;
- adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
} else {
adapter->vxlan.enable = false;
adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
const char *string;
} qede_tunn_types[] = {
{
- ETH_TUNNEL_FILTER_OMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC,
ECORE_FILTER_MAC,
ECORE_TUNN_CLSS_MAC_VLAN,
"outer-mac"
},
{
- ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_VNI,
ECORE_TUNN_CLSS_MAC_VNI,
"vni"
},
{
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac"
},
{
- ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_VLAN,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-vlan"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_MAC_VNI,
"outer-mac and vni"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-mac"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-vlan"
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VNI,
"vni and inner-mac",
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"vni and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_OIP,
+ RTE_ETH_TUNNEL_FILTER_OIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-IP"
},
{
- ETH_TUNNEL_FILTER_IIP,
+ RTE_ETH_TUNNEL_FILTER_IIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"inner-IP"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN_TENID"
},
{
- RTE_TUNNEL_FILTER_IMAC_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_TENID"
},
{
- RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
/* check FDIR modes */
switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
ECORE_TUNN_CLSS_MAC_VLAN, false);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
qdev->vxlan.udp_port = udp_port;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
- if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
#define QEDE_MAX_ETHER_HDR_LEN (RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
#define QEDE_ETH_MAX_LEN (RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
-#define QEDE_RSS_OFFLOAD_ALL (ETH_RSS_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP |\
- ETH_RSS_NONFRAG_IPV4_UDP |\
- ETH_RSS_IPV6 |\
- ETH_RSS_NONFRAG_IPV6_TCP |\
- ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN |\
- ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL (RTE_ETH_RSS_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |\
+ RTE_ETH_RSS_IPV6 |\
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |\
+ RTE_ETH_RSS_VXLAN |\
+ RTE_ETH_RSS_GENEVE)
#define QEDE_RXTX_MAX(qdev) \
(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -110,21 +110,21 @@ static int
eth_dev_stop(struct rte_eth_dev *dev)
{
dev->data->dev_started = 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_down(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_up(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = 1;
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
{
uint32_t phy_caps = 0;
- if (~speeds & ETH_LINK_SPEED_FIXED) {
+ if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
phy_caps |= (1 << EFX_PHY_CAP_AN);
/*
* If no speeds are specified in the mask, any supported
* may be negotiated
*/
- if (speeds == ETH_LINK_SPEED_AUTONEG)
+ if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
phy_caps |=
(1 << EFX_PHY_CAP_1000FDX) |
(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
(1 << EFX_PHY_CAP_50000FDX) |
(1 << EFX_PHY_CAP_100000FDX);
}
- if (speeds & ETH_LINK_SPEED_1G)
+ if (speeds & RTE_ETH_LINK_SPEED_1G)
phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
- if (speeds & ETH_LINK_SPEED_10G)
+ if (speeds & RTE_ETH_LINK_SPEED_10G)
phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
- if (speeds & ETH_LINK_SPEED_25G)
+ if (speeds & RTE_ETH_LINK_SPEED_25G)
phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
- if (speeds & ETH_LINK_SPEED_40G)
+ if (speeds & RTE_ETH_LINK_SPEED_40G)
phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
- if (speeds & ETH_LINK_SPEED_50G)
+ if (speeds & RTE_ETH_LINK_SPEED_50G)
phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
- if (speeds & ETH_LINK_SPEED_100G)
+ if (speeds & RTE_ETH_LINK_SPEED_100G)
phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
tx_offloads |= txq_info->offloads;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
else
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
sa->priv.shared->tunnel_encaps =
encp->enc_tunnel_encapsulations_supported;
- if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
if (sa->tso &&
(sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
- (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+ (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
SFC_DP_RX_FEAT_INTR |
SFC_DP_RX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.get_dev_info = sfc_ef100_rx_get_dev_info,
.qsize_up_rings = sfc_ef100_rx_qsize_up_rings,
.qcreate = sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
.features = SFC_DP_TX_FEAT_MULTI_PROCESS |
SFC_DP_TX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef100_get_dev_info,
.qsize_up_rings = sfc_ef100_tx_qsize_up_rings,
.qcreate = sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
},
.features = SFC_DP_RX_FEAT_FLOW_FLAG |
SFC_DP_RX_FEAT_FLOW_MARK,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.queue_offload_capa = 0,
.get_dev_info = sfc_ef10_essb_rx_get_dev_info,
.pool_ops_supported = sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
},
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.get_dev_info = sfc_ef10_rx_get_dev_info,
.qsize_up_rings = sfc_ef10_rx_qsize_up_rings,
.qcreate = sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
if (txq->sw_ring == NULL)
goto fail_sw_ring_alloc;
- if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+ if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
info->txq_entries,
SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_EF10,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
.type = SFC_DP_TX,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = sa->sriov.num_vfs;
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->max_rx_queues = sa->rxq_max;
dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
dev_info->tx_queue_offload_capa;
- if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->default_txconf.offloads |= txq_offloads_def;
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
switch (link_fc) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case EFX_FCNTL_RESPOND:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case EFX_FCNTL_GENERATE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
default:
sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
fcntl = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
fcntl = EFX_FCNTL_RESPOND;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
fcntl = EFX_FCNTL_GENERATE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
break;
default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
- qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
qinfo->scattered_rx = 1;
}
qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
{
switch (rte_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
return EFX_TUNNEL_PROTOCOL_VXLAN;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
return EFX_TUNNEL_PROTOCOL_GENEVE;
default:
return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
/*
* Mapping of hash configuration between RTE and EFX is not one-to-one,
- * hence, conversion is done here to derive a correct set of ETH_RSS
+ * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
* flags which corresponds to the active EFX configuration stored
* locally in 'sfc_adapter' and kept up-to-date
*/
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (entry = 0; entry < reta_size; entry++) {
- int grp = entry / RTE_RETA_GROUP_SIZE;
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[grp].mask >> grp_idx) & 1)
reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
for (entry = 0; entry < reta_size; entry++) {
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
struct rte_eth_rss_reta_entry64 *grp;
- grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+ grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
if (grp->mask & (1ull << grp_idx)) {
if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
- .tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+ .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
.inner_type = RTE_BE16(0xffff),
};
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
memset(link_info, 0, sizeof(*link_info));
if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
- link_info->link_status = ETH_LINK_DOWN;
+ link_info->link_status = RTE_ETH_LINK_DOWN;
else
- link_info->link_status = ETH_LINK_UP;
+ link_info->link_status = RTE_ETH_LINK_UP;
switch (link_mode) {
case EFX_LINK_10HDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_10FDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100HDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_100FDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_1000HDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_1000FDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_10000FDX:
- link_info->link_speed = ETH_SPEED_NUM_10G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_25000FDX:
- link_info->link_speed = ETH_SPEED_NUM_25G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_40000FDX:
- link_info->link_speed = ETH_SPEED_NUM_40G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_40G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_50000FDX:
- link_info->link_speed = ETH_SPEED_NUM_50G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_50G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100000FDX:
- link_info->link_speed = ETH_SPEED_NUM_100G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
SFC_ASSERT(B_FALSE);
/* FALLTHROUGH */
case EFX_LINK_UNKNOWN:
case EFX_LINK_DOWN:
- link_info->link_speed = ETH_SPEED_NUM_NONE;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_NONE;
link_info->link_duplex = 0;
break;
}
- link_info->link_autoneg = ETH_LINK_AUTONEG;
+ link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
}
switch (conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (nb_rx_queues != 1) {
sfcr_err(sr, "Rx RSS is not supported with %u queues",
nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
ret = -EINVAL;
}
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
break;
default:
sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
break;
}
- if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+ if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
sfcr_err(sr, "Tx mode MQ modes not supported");
ret = -EINVAL;
}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
} else {
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
}
return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_RX_EFX,
},
.features = SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.qsize_up_rings = sfc_efx_rx_qsize_up_rings,
.qcreate = sfc_efx_rx_qcreate,
.qdestroy = sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (encp->enc_tunnel_encapsulations_supported == 0)
- no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
return ~no_caps;
}
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
encp->enc_rx_prefix_size,
- (offloads & DEV_RX_OFFLOAD_SCATTER),
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
encp->enc_rx_scatter_max,
&error)) {
sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
rxq_info->type_flags |=
- (offloads & DEV_RX_OFFLOAD_SCATTER) ?
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
if ((encp->enc_tunnel_encapsulations_supported != 0) &&
(sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->refill_mb_pool = mb_pool;
if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
- (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
else
rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
* Mapping between RTE RSS hash functions and their EFX counterparts.
*/
static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
- { ETH_RSS_NONFRAG_IPV4_TCP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV4_UDP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP,
EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
- { ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE) },
- { ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_IPV6_EX,
+ { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_EX,
EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
EFX_RX_HASH(IPV6, 2TUPLE) }
};
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
int rc = 0;
switch (rxmode->mq_mode) {
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* No special checks are required */
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
sfc_err(sa, "RSS is not available");
rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
* so unsupported offloads cannot be added as the result of
* below check.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
- (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+ (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
- rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
- if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
- rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
}
return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
}
configure_rss:
- rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+ rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (!encp->enc_hw_tx_insert_vlan_enabled)
- no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (!encp->enc_tunnel_encapsulations_supported)
- no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (!sa->tso)
- no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
- no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
- no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
return ~no_caps;
}
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
}
/* We either perform both TCP and UDP offload, or no offload at all */
- if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
- ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+ if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+ ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
sfc_err(sa, "TCP and UDP offloads can't be set independently");
rc = EINVAL;
}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
int rc = 0;
switch (txmode->mq_mode) {
- case ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_NONE:
break;
default:
sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
if (rc != 0)
goto fail_ev_qstart;
- if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
- if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
- if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
flags |= EFX_TXQ_CKSUM_TCPUDP;
- if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
- if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/*
* Here VLAN TCI is expected to be zero in case if no
- * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
* if the calling app ignores the absence of
- * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
* TX_ERROR will occur
*/
pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_TX_EFX,
},
.features = 0,
- .dev_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
.qsize_up_rings = sfc_efx_tx_qsize_up_rings,
.qcreate = sfc_efx_tx_qcreate,
.qdestroy = sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
return status;
/* Link UP */
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
struct pmd_internals *p = dev->data->dev_private;
/* Link DOWN */
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
/* Firmware */
softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
/* dev->data */
dev->data->dev_private = dev_private;
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
dev->data->mac_addrs = &eth_addr;
dev->data->promiscuous = 1;
dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
eth_dev_configure(struct rte_eth_dev *dev)
{
struct rte_eth_dev_data *data = dev->data;
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
dev->rx_pkt_burst = eth_szedata2_rx_scattered;
data->scattered_rx = 1;
} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_queues = internals->max_rx_queues;
dev_info->max_tx_queues = internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
dev_info->tx_offload_capa = 0;
dev_info->rx_queue_offload_capa = 0;
dev_info->tx_queue_offload_capa = 0;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_status = ETH_LINK_UP;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
#define TAP_IOV_DEFAULT_MAX 1024
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
static int tap_devices_count;
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
static volatile uint32_t tap_trigger; /* Rx trigger */
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
len = readv(process_private->rxq_fds[rxq->queue_id],
*rxq->iovecs,
- 1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+ 1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
rxq->nb_rx_desc : 1));
if (len < (int)sizeof(struct tun_pi))
break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
seg->next = NULL;
mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
RTE_PTYPE_ALL_MASK);
- if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
tap_verify_csum(mbuf);
/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
}
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
}
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
uint32_t speed = pmd_link.link_speed;
uint32_t capa = 0;
- if (speed >= ETH_SPEED_NUM_10M)
- capa |= ETH_LINK_SPEED_10M;
- if (speed >= ETH_SPEED_NUM_100M)
- capa |= ETH_LINK_SPEED_100M;
- if (speed >= ETH_SPEED_NUM_1G)
- capa |= ETH_LINK_SPEED_1G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_2_5G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_5G;
- if (speed >= ETH_SPEED_NUM_10G)
- capa |= ETH_LINK_SPEED_10G;
- if (speed >= ETH_SPEED_NUM_20G)
- capa |= ETH_LINK_SPEED_20G;
- if (speed >= ETH_SPEED_NUM_25G)
- capa |= ETH_LINK_SPEED_25G;
- if (speed >= ETH_SPEED_NUM_40G)
- capa |= ETH_LINK_SPEED_40G;
- if (speed >= ETH_SPEED_NUM_50G)
- capa |= ETH_LINK_SPEED_50G;
- if (speed >= ETH_SPEED_NUM_56G)
- capa |= ETH_LINK_SPEED_56G;
- if (speed >= ETH_SPEED_NUM_100G)
- capa |= ETH_LINK_SPEED_100G;
+ if (speed >= RTE_ETH_SPEED_NUM_10M)
+ capa |= RTE_ETH_LINK_SPEED_10M;
+ if (speed >= RTE_ETH_SPEED_NUM_100M)
+ capa |= RTE_ETH_LINK_SPEED_100M;
+ if (speed >= RTE_ETH_SPEED_NUM_1G)
+ capa |= RTE_ETH_LINK_SPEED_1G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_2_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_10G)
+ capa |= RTE_ETH_LINK_SPEED_10G;
+ if (speed >= RTE_ETH_SPEED_NUM_20G)
+ capa |= RTE_ETH_LINK_SPEED_20G;
+ if (speed >= RTE_ETH_SPEED_NUM_25G)
+ capa |= RTE_ETH_LINK_SPEED_25G;
+ if (speed >= RTE_ETH_SPEED_NUM_40G)
+ capa |= RTE_ETH_LINK_SPEED_40G;
+ if (speed >= RTE_ETH_SPEED_NUM_50G)
+ capa |= RTE_ETH_LINK_SPEED_50G;
+ if (speed >= RTE_ETH_SPEED_NUM_56G)
+ capa |= RTE_ETH_LINK_SPEED_56G;
+ if (speed >= RTE_ETH_SPEED_NUM_100G)
+ capa |= RTE_ETH_LINK_SPEED_100G;
return capa;
}
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
if (!(ifr.ifr_flags & IFF_UP) ||
!(ifr.ifr_flags & IFF_RUNNING)) {
- dev_link->link_status = ETH_LINK_DOWN;
+ dev_link->link_status = RTE_ETH_LINK_DOWN;
return 0;
}
}
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
dev_link->link_status =
((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
- ETH_LINK_UP :
- ETH_LINK_DOWN);
+ RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN);
return 0;
}
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
int ret;
/* initialize GSO context */
- gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!pmd->gso_ctx_mp) {
/*
* Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->csum = !!(offloads &
- (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM));
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1768,7 +1768,7 @@ static int
tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- if (fc_conf->mode != RTE_FC_NONE)
+ if (fc_conf->mode != RTE_ETH_FC_NONE)
return -ENOTSUP;
return 0;
}
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
}
}
}
- pmd_link.link_speed = ETH_SPEED_NUM_10G;
+ pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
return 0;
}
- speed = ETH_SPEED_NUM_10G;
+ speed = RTE_ETH_SPEED_NUM_10G;
/* use tap%d which causes kernel to choose next available */
strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
#define TAP_RSS_HASH_KEY_SIZE 40
/* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
/* hashed fields for RSS */
enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 328d6d56d921..38a2ddc633b5 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
if (nic->duplex == NICVF_HALF_DUPLEX)
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else if (nic->duplex == NICVF_FULL_DUPLEX)
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_speed = nic->speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* rte_eth_link_get() might need to wait up to 9 seconds */
for (i = 0; i < MAX_CHECK_TIME; i++) {
nicvf_link_status_update(nic, &link);
- if (link.link_status == ETH_LINK_UP)
+ if (link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
{
uint64_t nic_rss = 0;
- if (ethdev_rss & ETH_RSS_IPV4)
+ if (ethdev_rss & RTE_ETH_RSS_IPV4)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_IPV6)
+ if (ethdev_rss & RTE_ETH_RSS_IPV6)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
nic_rss |= RSS_TUN_VXLAN_ENA;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
nic_rss |= RSS_TUN_GENEVE_ENA;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
nic_rss |= RSS_TUN_NVGRE_ENA;
}
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
uint64_t ethdev_rss = 0;
if (nic_rss & RSS_IP_ENA)
- ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+ ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP);
if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
- ethdev_rss |= ETH_RSS_PORT;
+ ethdev_rss |= RTE_ETH_RSS_PORT;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
if (nic_rss & RSS_TUN_VXLAN_ENA)
- ethdev_rss |= ETH_RSS_VXLAN;
+ ethdev_rss |= RTE_ETH_RSS_VXLAN;
if (nic_rss & RSS_TUN_GENEVE_ENA)
- ethdev_rss |= ETH_RSS_GENEVE;
+ ethdev_rss |= RTE_ETH_RSS_GENEVE;
if (nic_rss & RSS_TUN_NVGRE_ENA)
- ethdev_rss |= ETH_RSS_NVGRE;
+ ethdev_rss |= RTE_ETH_RSS_NVGRE;
}
return ethdev_rss;
}
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = tbl[j];
}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
tbl[j] = reta_conf[i].reta[j];
}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
dev->data->nb_rx_queues,
dev->data->dev_conf.lpbk_mode, rsshf);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
ret = nicvf_rss_term(nic);
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
if (ret)
PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
multiseg = true;
break;
}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->offloads = offloads;
- is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+ is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
/* Choose optimum free threshold value for multipool case */
if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
PMD_INIT_FUNC_TRACE();
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
};
return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
/* Configure VLAN Strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = nicvf_vlan_offload_config(dev, mask);
/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
/* Setup scatter mode if needed by jumbo */
if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!rte_eal_has_hugepages()) {
PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
return -EINVAL;
}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic->offload_cksum = 1;
PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct nicvf *nic = nicvf_pmd_priv(dev);
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
nicvf_vlan_hw_strip(nic, true);
else
nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
#define NICVF_UNKNOWN_DUPLEX 0xff
#define NICVF_RSS_OFFLOAD_PASS1 ( \
- ETH_RSS_PORT | \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define NICVF_RSS_OFFLOAD_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
#define NICVF_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define NICVF_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NICVF_DEFAULT_RX_FREE_THRESH 224
#define NICVF_DEFAULT_TX_FREE_THRESH 224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
restart = (rxcfg & TXGBE_RXCFG_ENA) &&
!(rxcfg & TXGBE_RXCFG_VLAN);
rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
wr32m(hw, TXGBE_VLANCTL,
TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
TXGBE_TAGTPID_LSB(tpid));
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (vlan_ext) {
/* Only the high 16-bits is valid */
wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
txgbe_vlan_strip_queue_set(dev, i, 1);
else
txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct txgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
txgbe_vlan_hw_strip_config(dev);
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
txgbe_vlan_hw_filter_enable(dev);
else
txgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
txgbe_vlan_hw_extend_enable(dev);
else
txgbe_vlan_hw_extend_disable(dev);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
txgbe_qinq_hw_strip_enable(dev);
else
txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode =
- ETH_MQ_RX_VMDQ_ONLY;
+ RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
dev->data->dev_conf.txmode.mq_mode =
- ETH_MQ_TX_VMDQ_ONLY;
+ RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
txgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
if (err)
goto error;
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
link_speeds = &dev->data->dev_conf.link_speeds;
if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = (TXGBE_LINK_SPEED_100M_FULL |
TXGBE_LINK_SPEED_1GB_FULL |
TXGBE_LINK_SPEED_10GB_FULL);
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= TXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= TXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= TXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= TXGBE_LINK_SPEED_100M_FULL;
}
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_desc_lim = tx_desc_lim;
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
}
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case TXGBE_LINK_SPEED_UNKNOWN:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case TXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case TXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case TXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
return -ENOTSUP;
}
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
wr32(hw, TXGBE_UCADDRTBL(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
wr32(hw, TXGBE_UCADDRTBL(i), 0);
}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= TXGBE_POOLETHCTL_UTA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= TXGBE_POOLETHCTL_MCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= TXGBE_POOLETHCTL_UCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= TXGBE_POOLETHCTL_BCA;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= TXGBE_POOLETHCTL_MCP;
return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = TXGBE_INCVAL_100;
shift = TXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = TXGBE_INCVAL_1GB;
shift = TXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = TXGBE_INCVAL_10GB;
shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, 0);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, 0);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, 0);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
#define TXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define TXGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define TXGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
txgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
txgbevf_vlan_strip_queue_set(dev, i, on);
}
}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
uint32_t *fdirctrl, uint32_t *flex)
{
*fdirctrl = 0;
*flex = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_signature_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= TXGBE_SECRXCTL_CRCSTRIP;
wr32(hw, TXGBE_SECRXCTL, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
reg = rd32(hw, TXGBE_SECTXCTL);
if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct txgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
break;
}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &eth_dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = TXGBE_DEV_HW(eth_dev);
vmvir = rd32(hw, TXGBE_POOLTAG(vf));
vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
uint64_t
txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_VLAN_STRIP;
+ return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (!txgbe_is_vf(dev))
- offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_VLAN_EXTEND);
+ offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
/*
* RSC is only supported by PF devices in a non-SR-IOV
* mode.
*/
if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == txgbe_mac_raptor)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_UDP_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (!txgbe_is_vf(dev))
- tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_VFPLCFG_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
} else {
mrqc = rd32(hw, TXGBE_RACTL);
mrqc &= ~TXGBE_RACTL_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_RACTL_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_RACTL_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_RACTL_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_RACTL_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6UDP;
if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
rss_hf = 0;
} else {
mrqc = rd32(hw, TXGBE_RACTL);
if (mrqc & TXGBE_RACTL_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_RACTL_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_RACTL_RSSENA))
rss_hf = 0;
}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
*/
if (adapter->rss_reta_updated == 0) {
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == dev->data->nb_rx_queues)
j = 0;
reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
txgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
- if (num_pools == ETH_16_POOLS) {
+ if (num_pools == RTE_ETH_16_POOLS) {
mrqc = TXGBE_PORTCTL_NUMTC_8;
mrqc |= TXGBE_PORTCTL_NUMVT_16;
} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_POOLCTL, vt_ctl);
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
wr32(hw, TXGBE_POOLRXENA(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
wr32(hw, TXGBE_ETHADDRIDX, 0);
wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
/*PF VF Transmit Enable*/
wr32(hw, TXGBE_POOLTXENA(0),
vmdq_tx_conf->nb_queue_pools ==
- ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_rx = DCB_RX_CONFIG;
/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
/*Configure general VMDQ and DCB RX parameters*/
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
- if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
wr32(hw, TXGBE_PBRXSIZE(i), 0);
}
if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
wr32(hw, TXGBE_PBTXSIZE(i), 0);
wr32(hw, TXGBE_PBTXDMATH(i), 0);
}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = txgbe_dcb_pfc_enabled;
}
txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* pool enabling for receive - 64 */
wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
txgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
txgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
txgbe_vmdq_tx_hw_configure(hw);
else
wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
"is disabled");
return -EINVAL;
}
rfctl = rd32(hw, TXGBE_PSRCTL);
- if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~TXGBE_PSRCTL_RSCDIA;
else
rfctl |= TXGBE_PSRCTL_RSCDIA;
wr32(hw, TXGBE_PSRCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
}
#endif
}
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = rd32(hw, TXGBE_PSRCTL);
rxcsum |= TXGBE_PSRCTL_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= TXGBE_PSRCTL_L4CSUM;
else
rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == txgbe_mac_raptor) {
rdrxctl = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
txgbe_setup_loopback_link_raptor(hw);
#ifdef RTE_LIB_SECURITY
- if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
- (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = txgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Set PSR type for VF RSS according to max Rx queue */
psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
*/
wr32(hw, TXGBE_RXCFG(i), srrctl);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
* little-endian order.
*/
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == conf->conf.queue_num)
j = 0;
reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
uint8_t rx_deferred_start; /**< not in global dev start. */
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a7935a716de9..27f81a5cafc5 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN
};
struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_atomic32_set(&internal->dev_attached, 1);
update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
rte_atomic32_set(&internal->dev_attached, 0);
update_queuing_status(eth_dev);
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
if (vhost_driver_setup(dev) < 0)
return -1;
- internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_tx_queues = internal->max_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 047d3f43a3cf..74ede2aeccc1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
virtio_dev_close(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "virtio_dev_close");
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1771,7 +1771,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
- if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+ if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
config = &local_config;
virtio_read_dev_config(hw,
@@ -1785,7 +1785,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
}
}
if (hw->duplex == DUPLEX_UNKNOWN)
- hw->duplex = ETH_LINK_FULL_DUPLEX;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
hw->speed, hw->duplex);
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1884,7 +1884,7 @@ int
eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+ uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
int vectorized = 0;
int ret;
@@ -1955,22 +1955,22 @@ static uint32_t
virtio_dev_speed_capa_get(uint32_t speed)
{
switch (speed) {
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -2086,14 +2086,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "configure");
req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
- if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Rx multi queue mode %d",
rxmode->mq_mode);
return -EINVAL;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Tx multi queue mode %d",
txmode->mq_mode);
@@ -2111,20 +2111,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
req_features |=
(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
- if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_CSUM);
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
req_features |=
(1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2136,15 +2136,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
PMD_DRV_LOG(ERR,
"rx checksum not available on this host");
return -ENOTSUP;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
PMD_DRV_LOG(ERR,
@@ -2156,12 +2156,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
virtio_dev_cq_start(dev);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
hw->vlan_strip = 1;
- hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+ hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
- if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(ERR,
"vlan filtering not available on this host");
@@ -2214,7 +2214,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(INFO,
"disabled packed ring vectorized rx for TCP_LRO enabled");
hw->use_vec_rx = 0;
@@ -2241,10 +2241,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN_STRIP)) {
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
PMD_DRV_LOG(INFO,
"disabled split ring vectorized rx for offloading enabled");
hw->use_vec_rx = 0;
@@ -2437,7 +2437,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
struct rte_eth_link link;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "stop");
dev->data->dev_started = 0;
@@ -2478,28 +2478,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
memset(&link, 0, sizeof(link));
link.link_duplex = hw->duplex;
link.link_speed = hw->speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (!hw->started) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
PMD_INIT_LOG(DEBUG, "Get link status from hw");
virtio_read_dev_config(hw,
offsetof(struct virtio_net_config, status),
&status, sizeof(status));
if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
PMD_INIT_LOG(DEBUG, "Port %d is down",
dev->data->port_id);
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
PMD_INIT_LOG(DEBUG, "Port %d is up",
dev->data->port_id);
}
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2512,8 +2512,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct virtio_hw *hw = dev->data->dev_private;
uint64_t offloads = rxmode->offloads;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(NOTICE,
@@ -2523,8 +2523,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (mask & ETH_VLAN_STRIP_MASK)
- hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
+ hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -2546,32 +2546,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = hw->max_mtu;
host_features = VIRTIO_OPS(hw)->get_features(hw);
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}
tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
/*
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
#define VMXNET3_TX_MAX_SEG UINT8_MAX
#define VMXNET3_TX_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define VMXNET3_RX_OFFLOAD_CAP \
- (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* set the initial link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(eth_dev, &link);
return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
hw->queueDescPA = mz->iova;
hw->queue_desc_len = (uint16_t)size;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Allocate memory structure for UPT1_RSSConf and configure */
mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
"rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
devRead->rxFilterConf.rxMode = 0;
/* Setting up feature flags */
- if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
devRead->misc.uptFeatures |= VMXNET3_F_LRO;
devRead->misc.maxNumRxSG = 0;
}
- if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
ret = vmxnet3_rss_configure(dev);
if (ret != VMXNET3_SUCCESS)
return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
}
ret = vmxnet3_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
}
if (VMXNET3_VERSION_GE_4(hw) &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Check for additional RSS */
ret = vmxnet3_v4_rss_configure(dev);
if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
dev_info->max_mtu = VMXNET3_MAX_MTU;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret & 0x1)
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint32_t *vf_table = devRead->rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
else
devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
VMXNET3_CMD_UPDATE_FEATURE);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
VMXNET3_MAX_RX_QUEUES + 1)
#define VMXNET3_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define VMXNET3_V4_RSS_MASK ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define VMXNET3_MANDATORY_V4_RSS ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
/* RSS configuration structure - shared with device through GPA */
typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
rss_hf = port_rss_conf->rss_hf &
(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
/* loading hashType */
dev_rss_conf->hashType = 0;
rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 68e3c13730ad..a9fef2297842 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,11 +71,11 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -328,7 +328,7 @@ check_port_link_status(uint16_t port_id)
if (link_get_err >= 0 && link.link_status) {
const char *dp = (link.link_duplex ==
- ETH_LINK_FULL_DUPLEX) ?
+ RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex";
printf("\nPort %u Link Up - speed %s - %s\n",
port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 6352a715c0d9..3f41d8e5965d 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -115,17 +115,17 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -149,9 +149,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
@@ -241,9 +241,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
BOND_PORT, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
if (retval != 0)
rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
}
},
};
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
int ret;
memset(&cfg_port, 0, sizeof(cfg_port));
- cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+ cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
pause_param->tx_pause = 0;
pause_param->rx_pause = 0;
switch (fc_conf.mode) {
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
pause_param->rx_pause = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
pause_param->tx_pause = 1;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_param->rx_pause = 1;
pause_param->tx_pause = 1;
default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
if (pause_param->tx_pause) {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_FULL;
+ fc_conf.mode = RTE_ETH_FC_FULL;
else
- fc_conf.mode = RTE_FC_TX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
} else {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_RX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf.mode = RTE_FC_NONE;
+ fc_conf.mode = RTE_ETH_FC_NONE;
}
status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
for (vf = 0; vf < num_vfs; vf++) {
#ifdef RTE_NET_IXGBE
rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
- ETH_VMDQ_ACCEPT_UNTAG, 0);
+ RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
#endif
}
/* Enable Rx VLAN filter; unsupported status on a VF is discarded */
- ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
if (ret != 0)
return ret;
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
memset(&link, 0, sizeof(link));
do {
link_get_err = rte_eth_link_get(port_id, &link);
- if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+ if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
if (link_get_err < 0)
rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
rte_strerror(-link_get_err));
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
}
@@ -138,12 +138,12 @@ init_port(void)
},
.txmode = {
.offloads =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
},
};
struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
/* Configuring port to use RSS for multiple RX queues. 8< */
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_PROTO_MASK,
+ .rss_hf = RTE_ETH_RSS_PROTO_MASK,
}
}
};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 8644454a9aef..0307709f2b4a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -149,13 +149,13 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER),
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -267,5 +267,5 @@ link_is_up(const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4f0e12e62447..a9f9bd477007 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -738,7 +738,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1096,9 +1096,9 @@ main(int argc, char **argv)
n_tx_queue = nb_lcores;
if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
n_tx_queue = MAX_TX_QUEUE_PER_PORT;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5f5ec260f315..feddd84d1551 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,19 +234,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1455,10 +1455,10 @@ print_usage(const char *prgname)
" \"parallel\" : Parallel\n"
" --" CMD_LINE_OPT_RX_OFFLOAD
": bitmask of the RX HW offload capabilities to enable/use\n"
- " (DEV_RX_OFFLOAD_*)\n"
+ " (RTE_ETH_RX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_TX_OFFLOAD
": bitmask of the TX HW offload capabilities to enable/use\n"
- " (DEV_TX_OFFLOAD_*)\n"
+ " (RTE_ETH_TX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_REASSEMBLE " NUM"
": max number of entries in reassemble(fragment) table\n"
" (zero (default value) disables reassembly)\n"
@@ -1909,7 +1909,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2212,8 +2212,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2236,12 +2236,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
portid, local_port_conf.txmode.offloads,
dev_info.tx_offload_capa);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
printf("port %u configuring rx_offloads=0x%" PRIx64
", tx_offloads=0x%" PRIx64 "\n",
@@ -2299,7 +2299,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
/* Pre-populate pkt offloads based on capabilities */
qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
- if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
tx_queueid++;
@@ -2660,7 +2660,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
struct rte_flow *flow;
int ret;
- if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
if (inbound) {
if ((dev_info.rx_offload_capa &
- DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware RX IPSec offload is not supported\n");
return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
} else { /* outbound */
if ((dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+ *rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
}
/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
return 0;
}
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 87538dccc879..32670f80bc2b 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -115,8 +115,8 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
},
};
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 1790ec024072..f780be712ec0 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -608,9 +608,9 @@ init_port(uint16_t port)
"Error during getting device (port %u) info: %s\n",
port, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c646f1748ca7..42c04abbbb34 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,11 +216,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1808,7 +1808,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2632,9 +2632,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (retval < 0) {
printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
int ret;
if (rsrc->event_mode) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}
/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
local_port_conf.rx_adv_conf.rss_conf.rss_hf);
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queue. 8< */
ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 06280321b1f2..092ea0189c7f 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 07271affb4a9..78e43f9c091e 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index f3deeba0a665..3edabd1dd19b 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the number of queues for a port. */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 1890c88a5b01..fea414ae5929 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,19 +124,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1936,7 +1936,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2004,7 +2004,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2088,9 +2088,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* Clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -828,9 +828,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6aa1b66ecfcc..5a4359a368b5 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,18 +250,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_UDP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
@@ -2197,7 +2197,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2510,7 +2510,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2638,9 +2638,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
rte_panic("Error during getting device (port %u) info:"
"%s\n", port_id, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index f27c76bb7a73..51cbf81f1afa 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,18 +120,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -903,7 +903,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -988,7 +988,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1053,15 +1053,15 @@ l3fwd_poll_resource_setup(void)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
if (dev_info.max_rx_queues == 1)
- local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index e4542df11f87..8714acddd110 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.intr_conf = {
.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
link_get_err < 0 ? "0" :
rte_eth_link_speed_to_str(link.link_speed),
link_get_err < 0 ? "Link get failed" :
- (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex"),
port_statistics[portid].tx,
port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS
+ .mq_mode = RTE_ETH_MQ_RX_RSS
}
};
const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
{
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
info.default_rxconf.rx_drop_en = 1;
- if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
static struct rte_eth_conf eth_port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
return ret;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
if (ret != 0)
return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 5de5df997ee9..baeee9298d57 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,18 +307,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_TCP,
+ .rss_hf = RTE_ETH_RSS_TCP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -3441,7 +3441,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3494,7 +3494,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -3593,9 +3593,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Force full Tx path in the driver, required for IEEE1588 */
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
***/
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -332,8 +332,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_rx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_tx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (hw_timestamping) {
- if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+ if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
printf("\nERROR: Port %u does not support hardware timestamping\n"
, port);
return -1;
}
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
if (hwts_dynfield_offset < 0) {
printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
if (retval != 0)
return retval;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/*
* Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 999809e6ed41..49c134a3042f 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
/* empty vmdq configuration structure. Filled in programatically */
static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
/*
* VLAN strip is necessary for 1G NIC such as I350,
* this fixes bug of ipv4 forwarding in guest can't
* forward pakets from one virtio dev to another virtio dev.
*/
- .offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO),
},
.rx_adv_conf = {
/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
return -1;
rx_rings = (uint16_t)dev_info.max_rx_queues;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
case 'P':
promiscuous = 1;
vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
- ETH_VMDQ_ACCEPT_BROADCAST |
- ETH_VMDQ_ACCEPT_MULTICAST;
+ RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+ RTE_ETH_VMDQ_ACCEPT_MULTICAST;
break;
case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index ee7f4324e141..1f336082e5c1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
/* empty vmdq configuration structure. Filled in programatically */
static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
(void)(rte_memcpy(ð_conf->rx_adv_conf.vmdq_rx_conf, &conf,
sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
if (retval != 0)
return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 14c20e6a8b26..1a19f1799bd2 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
static unsigned num_ports;
/* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs num_tcs = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs num_tcs = RTE_ETH_4_TCS;
static uint16_t num_queues, num_vmdq_queues;
static uint16_t vmdq_pool_base, vmdq_queue_base;
static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
/* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.dcb_tc = {0},
},
.dcb_rx_conf = {
- .nb_tcs = ETH_4_TCS,
+ .nb_tcs = RTE_ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.dcb_tc = {0},
},
},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
conf.pool_map[i].pools = 1UL << i;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
(void)(rte_memcpy(ð_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -382,9 +382,9 @@ vmdq_parse_num_pools(const char *q_arg)
if (n != 16 && n != 32)
return -1;
if (n == 16)
- num_pools = ETH_16_POOLS;
+ num_pools = RTE_ETH_16_POOLS;
else
- num_pools = ETH_32_POOLS;
+ num_pools = RTE_ETH_32_POOLS;
return 0;
}
@@ -404,9 +404,9 @@ vmdq_parse_num_tcs(const char *q_arg)
if (n != 4 && n != 8)
return -1;
if (n == 4)
- num_tcs = ETH_4_TCS;
+ num_tcs = RTE_ETH_4_TCS;
else
- num_tcs = ETH_8_TCS;
+ num_tcs = RTE_ETH_8_TCS;
return 0;
}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 0174ba03d7f3..c134b878684e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -116,7 +116,7 @@ struct rte_eth_dev_data {
/**< Device Ethernet link address.
* @see rte_eth_dev_release_port()
*/
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
/**< Bitmap associating MAC addresses to pools. */
struct rte_ether_addr *hash_mac_addrs;
/**< Device Ethernet MAC addresses of hash filtering.
@@ -1657,23 +1657,23 @@ struct rte_eth_syn_filter {
/**
* filter type of tunneling packet
*/
-#define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
- ETH_TUNNEL_FILTER_TENID | \
- ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID | \
+ RTE_ETH_TUNNEL_FILTER_IMAC)
/**
* Select IPv4 or IPv6 for tunnel filters.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f18aa916cca..7fd916c070e9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
#define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
#define RTE_RX_OFFLOAD_BIT2STR(_name) \
- { DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name) \
{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
static const struct {
@@ -128,14 +125,14 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
- RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
};
#undef RTE_RX_OFFLOAD_BIT2STR
#undef RTE_ETH_RX_OFFLOAD_BIT2STR
#define RTE_TX_OFFLOAD_BIT2STR(_name) \
- { DEV_TX_OFFLOAD_##_name, #_name }
+ { RTE_ETH_TX_OFFLOAD_##_name, #_name }
static const struct {
uint64_t offload;
@@ -1173,32 +1170,32 @@ uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
switch (speed) {
- case ETH_SPEED_NUM_10M:
- return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
- case ETH_SPEED_NUM_100M:
- return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
- case ETH_SPEED_NUM_1G:
- return ETH_LINK_SPEED_1G;
- case ETH_SPEED_NUM_2_5G:
- return ETH_LINK_SPEED_2_5G;
- case ETH_SPEED_NUM_5G:
- return ETH_LINK_SPEED_5G;
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10M:
+ return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+ case RTE_ETH_SPEED_NUM_100M:
+ return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+ case RTE_ETH_SPEED_NUM_1G:
+ return RTE_ETH_LINK_SPEED_1G;
+ case RTE_ETH_SPEED_NUM_2_5G:
+ return RTE_ETH_LINK_SPEED_2_5G;
+ case RTE_ETH_SPEED_NUM_5G:
+ return RTE_ETH_LINK_SPEED_5G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -1503,7 +1500,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t max_rx_pktlen;
uint32_t overhead_len;
@@ -1560,12 +1557,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
- if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
- (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+ (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
port_id,
- rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+ rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
ret = -EINVAL;
goto rollback;
}
@@ -2180,7 +2177,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* size is supported by the configured device.
*/
/* Get the real Ethernet overhead length */
- if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t overhead_len;
uint32_t max_rx_pktlen;
int ret;
@@ -2760,21 +2757,21 @@ const char *
rte_eth_link_speed_to_str(uint32_t link_speed)
{
switch (link_speed) {
- case ETH_SPEED_NUM_NONE: return "None";
- case ETH_SPEED_NUM_10M: return "10 Mbps";
- case ETH_SPEED_NUM_100M: return "100 Mbps";
- case ETH_SPEED_NUM_1G: return "1 Gbps";
- case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
- case ETH_SPEED_NUM_5G: return "5 Gbps";
- case ETH_SPEED_NUM_10G: return "10 Gbps";
- case ETH_SPEED_NUM_20G: return "20 Gbps";
- case ETH_SPEED_NUM_25G: return "25 Gbps";
- case ETH_SPEED_NUM_40G: return "40 Gbps";
- case ETH_SPEED_NUM_50G: return "50 Gbps";
- case ETH_SPEED_NUM_56G: return "56 Gbps";
- case ETH_SPEED_NUM_100G: return "100 Gbps";
- case ETH_SPEED_NUM_200G: return "200 Gbps";
- case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+ case RTE_ETH_SPEED_NUM_NONE: return "None";
+ case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
+ case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+ case RTE_ETH_SPEED_NUM_1G: return "1 Gbps";
+ case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+ case RTE_ETH_SPEED_NUM_5G: return "5 Gbps";
+ case RTE_ETH_SPEED_NUM_10G: return "10 Gbps";
+ case RTE_ETH_SPEED_NUM_20G: return "20 Gbps";
+ case RTE_ETH_SPEED_NUM_25G: return "25 Gbps";
+ case RTE_ETH_SPEED_NUM_40G: return "40 Gbps";
+ case RTE_ETH_SPEED_NUM_50G: return "50 Gbps";
+ case RTE_ETH_SPEED_NUM_56G: return "56 Gbps";
+ case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+ case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+ case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
default: return "Invalid";
}
}
@@ -2798,14 +2795,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
return -EINVAL;
}
- if (eth_link->link_status == ETH_LINK_DOWN)
+ if (eth_link->link_status == RTE_ETH_LINK_DOWN)
return snprintf(str, len, "Link down");
else
return snprintf(str, len, "Link up at %s %s %s",
rte_eth_link_speed_to_str(eth_link->link_speed),
- (eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"FDX" : "HDX",
- (eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+ (eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
"Autoneg" : "Fixed");
}
@@ -3712,7 +3709,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
dev = &rte_eth_devices[port_id];
if (!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
port_id);
return -ENOSYS;
@@ -3799,44 +3796,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
dev_offloads = orig_offloads;
/* check which option changed by application */
- cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
- mask |= ETH_VLAN_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ mask |= RTE_ETH_VLAN_STRIP_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
- mask |= ETH_VLAN_FILTER_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+ mask |= RTE_ETH_VLAN_FILTER_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+ cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
- mask |= ETH_VLAN_EXTEND_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ mask |= RTE_ETH_VLAN_EXTEND_MASK;
}
- cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+ cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
- mask |= ETH_QINQ_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+ mask |= RTE_ETH_QINQ_STRIP_MASK;
}
/*no change*/
@@ -3881,17 +3878,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
dev = &rte_eth_devices[port_id];
dev_offloads = &dev->data->dev_conf.rxmode.offloads;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- ret |= ETH_VLAN_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- ret |= ETH_VLAN_FILTER_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
- ret |= ETH_VLAN_EXTEND_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
- ret |= ETH_QINQ_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+ ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
return ret;
}
@@ -3968,7 +3965,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -EINVAL;
}
- if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+ if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
return -EINVAL;
}
@@ -3986,7 +3983,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
{
uint16_t i, num;
- num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+ num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
if (reta_conf[i].mask)
return 0;
@@ -4008,8 +4005,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
(reta_conf[idx].reta[shift] >= max_rxq)) {
RTE_ETHDEV_LOG(ERR,
@@ -4165,7 +4162,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4191,7 +4188,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4332,8 +4329,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
port_id);
return -EINVAL;
}
- if (pool >= ETH_64_POOLS) {
- RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+ if (pool >= RTE_ETH_64_POOLS) {
+ RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
@@ -6242,7 +6239,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
rte_tel_data_add_dict_string(d, status_str, "UP");
rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
rte_tel_data_add_dict_string(d, "duplex",
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex");
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 014270d31672..9f0addee116c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
* field is not supported, its value is 0.
* All byte-related statistics do not include Ethernet FCS regardless
* of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
*/
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of successfully received packets. */
@@ -280,43 +280,75 @@ struct rte_eth_stats {
/**@{@name Link speed capabilities
* Device supported speeds bitmap flags
*/
-#define ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
+#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
+#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
+#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
+#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
+#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
+#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
+#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
+#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**@}*/
/**@{@name Link speed
* Ethernet numeric link speeds in Mbps
*/
-#define ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
+#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
+#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
+#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
+#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
+#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
+#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
+#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
+#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
+#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**@}*/
/**
@@ -324,21 +356,27 @@ struct rte_eth_stats {
*/
__extension__
struct rte_eth_link {
- uint32_t link_speed; /**< ETH_SPEED_NUM_ */
- uint16_t link_duplex : 1; /**< ETH_LINK_[HALF/FULL]_DUPLEX */
- uint16_t link_autoneg : 1; /**< ETH_LINK_[AUTONEG/FIXED] */
- uint16_t link_status : 1; /**< ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /**< RTE_ETH_SPEED_NUM_ */
+ uint16_t link_duplex : 1; /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint16_t link_autoneg : 1; /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint16_t link_status : 1; /**< RTE_ETH_LINK_[DOWN/UP] */
} __rte_aligned(8); /**< aligned for atomic64 read/write */
/**@{@name Link negotiation
* Constants used in link management.
*/
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/
@@ -355,9 +393,12 @@ struct rte_eth_thresh {
/**@{@name Multi-queue mode
* @see rte_eth_conf.rxmode.mq_mode.
*/
-#define ETH_MQ_RX_RSS_FLAG 0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG 0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**@}*/
/**
@@ -366,50 +407,49 @@ struct rte_eth_thresh {
*/
enum rte_eth_rx_mq_mode {
/** None of DCB,RSS or VMDQ mode */
- ETH_MQ_RX_NONE = 0,
+ RTE_ETH_MQ_RX_NONE = 0,
/** For RX side, only RSS is on */
- ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
/** For RX side,only DCB is on. */
- ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
/** Both DCB and RSS enable */
- ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
/** RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
/** Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Enable both VMDQ and DCB in VMDq */
- ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
- ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+ RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS ETH_MQ_RX_RSS
-#define VMDQ_DCB ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
* packets using multi-TCs.
*/
enum rte_eth_tx_mq_mode {
- ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
- ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
- ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
+ RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
+ RTE_ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
+ RTE_ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
+ RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the RX features of an Ethernet port.
@@ -422,7 +462,7 @@ struct rte_eth_rxmode {
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
- * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -437,12 +477,17 @@ struct rte_eth_rxmode {
* Note that single VLAN is treated the same as inner VLAN.
*/
enum rte_vlan_type {
- ETH_VLAN_TYPE_UNKNOWN = 0,
- ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
- ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
- ETH_VLAN_TYPE_MAX,
+ RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+ RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+ RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+ RTE_ETH_VLAN_TYPE_MAX,
};
+#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+
/**
* A structure used to describe a vlan filter.
* If the bit corresponding to a VID is set, such VID is on.
@@ -513,38 +558,70 @@ struct rte_eth_rss_conf {
* Below macros are defined for RSS offload types, they can be used to
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
-#define ETH_RSS_IPV4 RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6 RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
-#define ETH_RSS_PORT RTE_BIT64(18)
-#define ETH_RSS_VXLAN RTE_BIT64(19)
-#define ETH_RSS_GENEVE RTE_BIT64(20)
-#define ETH_RSS_NVGRE RTE_BIT64(21)
-#define ETH_RSS_GTPU RTE_BIT64(23)
-#define ETH_RSS_ETH RTE_BIT64(24)
-#define ETH_RSS_S_VLAN RTE_BIT64(25)
-#define ETH_RSS_C_VLAN RTE_BIT64(26)
-#define ETH_RSS_ESP RTE_BIT64(27)
-#define ETH_RSS_AH RTE_BIT64(28)
-#define ETH_RSS_L2TPV3 RTE_BIT64(29)
-#define ETH_RSS_PFCP RTE_BIT64(30)
-#define ETH_RSS_PPPOE RTE_BIT64(31)
-#define ETH_RSS_ECPRI RTE_BIT64(32)
-#define ETH_RSS_MPLS RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4 RTE_BIT64(2)
+#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6 RTE_BIT64(8)
+#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT RTE_BIT64(18)
+#define ETH_RSS_PORT RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN RTE_BIT64(19)
+#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE RTE_BIT64(20)
+#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE RTE_BIT64(21)
+#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU RTE_BIT64(23)
+#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH RTE_BIT64(24)
+#define ETH_RSS_ETH RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN RTE_BIT64(25)
+#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN RTE_BIT64(26)
+#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP RTE_BIT64(27)
+#define ETH_RSS_ESP RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH RTE_BIT64(28)
+#define ETH_RSS_AH RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29)
+#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP RTE_BIT64(30)
+#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE RTE_BIT64(31)
+#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI RTE_BIT64(32)
+#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS RTE_BIT64(33)
+#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM RTE_ETH_RSS_IPV4_CHKSUM
/**
* The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -553,34 +630,41 @@ struct rte_eth_rss_conf {
* checksum type for constructing the use of RSS offload bits.
*
* Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
*
* For the case that checksum is not used in an UDP header,
* it takes the reserved value 0 as input for the hash function.
*/
-#define ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM RTE_ETH_RSS_L4_CHKSUM
/*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
* more specific input set selection. These bits are defined starting
* from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
* both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
* the same level are used simultaneously, it is the same case as none of
* them are added.
*/
-#define ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
@@ -602,22 +686,27 @@ struct rte_eth_rss_conf {
* It basically stands for the innermost encapsulation level RSS
* can be performed on according to PMD and device capabilities.
*/
-#define ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
-#define ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
-#define ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -632,219 +721,312 @@ struct rte_eth_rss_conf {
static inline uint64_t
rte_eth_rss_hf_refine(uint64_t rss_hf)
{
- if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
return rss_hf;
}
-#define ETH_RSS_IPV6_PRE32 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
-#define ETH_RSS_IPV6_PRE40 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
-#define ETH_RSS_IPV6_PRE48 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
-#define ETH_RSS_IPV6_PRE56 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
-#define ETH_RSS_IPV6_PRE64 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
-#define ETH_RSS_IPV6_PRE96 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
-#define ETH_RSS_IPV6_PRE32_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
-#define ETH_RSS_IPV6_PRE40_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
-#define ETH_RSS_IPV6_PRE48_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
-#define ETH_RSS_IPV6_PRE56_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
-#define ETH_RSS_IPV6_PRE64_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
-#define ETH_RSS_IPV6_PRE96_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
-#define ETH_RSS_IPV6_PRE32_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
-#define ETH_RSS_IPV6_PRE40_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
-#define ETH_RSS_IPV6_PRE48_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
-#define ETH_RSS_IPV6_PRE56_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
-#define ETH_RSS_IPV6_PRE64_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
-#define ETH_RSS_IPV6_PRE96_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
- ETH_RSS_S_VLAN | \
- ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
/**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | \
- ETH_RSS_PORT | \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE | \
- ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE | \
+ RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
* Some RSS RETA sizes may not be supported by some drivers, check the
* documentation or the description of relevant functions for more details.
*/
-#define ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE 64
+#define RTE_ETH_RSS_RETA_SIZE_64 64
+#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE 64
+#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
/**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/**@}*/
/**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/**@}*/
/**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
+#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
+#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/**@}*/
/* Definitions used for receive MAC address */
-#define ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/**@{@name VMDq Rx mode
* @see rte_eth_vmdq_rx_conf.rx_mode
*/
-#define ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/**@}*/
+/** Maximum nb. of vlan per mirror rule */
+#define RTE_ETH_MIRROR_MAX_VLANS 64
+#define ETH_MIRROR_MAX_VLANS RTE_ETH_MIRROR_MAX_VLANS
+
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
+
+/**
+ * A structure used to configure VLAN traffic mirror of an Ethernet port.
+ */
+struct rte_eth_vlan_mirror {
+ uint64_t vlan_mask; /**< mask for valid VLAN ID. */
+ /** VLAN ID list for vlan mirroring. */
+ uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
+};
+
+/**
+ * A structure used to configure traffic mirror of an Ethernet port.
+ */
+struct rte_eth_mirror_conf {
+ uint8_t rule_type; /**< Mirroring rule type */
+ uint8_t dst_pool; /**< Destination pool for this mirror rule. */
+ uint64_t pool_mask; /**< Bitmap of pool for pool mirroring */
+ /** VLAN ID setting for VLAN mirroring. */
+ struct rte_eth_vlan_mirror vlan;
+};
+
/**
* A structure used to configure 64 entries of Redirection Table of the
* Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -854,7 +1036,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
struct rte_eth_rss_reta_entry64 {
uint64_t mask;
/**< Mask bits indicate which entries need to be updated/queried. */
- uint16_t reta[RTE_RETA_GROUP_SIZE];
+ uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
/**< Group of 64 redirection table entries. */
};
@@ -863,38 +1045,44 @@ struct rte_eth_rss_reta_entry64 {
* in DCB configurations
*/
enum rte_eth_nb_tcs {
- ETH_4_TCS = 4, /**< 4 TCs with DCB. */
- ETH_8_TCS = 8 /**< 8 TCs with DCB. */
+ RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+ RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
* in VMDQ configurations.
*/
enum rte_eth_nb_pools {
- ETH_8_POOLS = 8, /**< 8 VMDq pools. */
- ETH_16_POOLS = 16, /**< 16 VMDq pools. */
- ETH_32_POOLS = 32, /**< 32 VMDq pools. */
- ETH_64_POOLS = 64 /**< 64 VMDq pools. */
+ RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */
+ RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
+ RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
+ RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
+#define ETH_8_POOLS RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_dcb_tx_conf {
enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_dcb_tx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_tx_conf {
@@ -920,8 +1108,8 @@ struct rte_eth_vmdq_dcb_conf {
struct {
uint16_t vlan_id; /**< The vlan id of the received frame */
uint64_t pools; /**< Bitmask of pools for packet rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
/**< Selects a queue in a pool */
};
@@ -932,7 +1120,7 @@ struct rte_eth_vmdq_dcb_conf {
* Using this feature, packets are routed to a pool of queues. By default,
* the pool selection is based on the MAC address, the vlan id in the
* vlan tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
* selection using only the MAC address. MAC address to pool mapping is done
* using the rte_eth_dev_mac_addr_add function, with the pool parameter
* corresponding to the pool id.
@@ -953,7 +1141,7 @@ struct rte_eth_vmdq_rx_conf {
struct {
uint16_t vlan_id; /**< The vlan id of the received frame */
uint64_t pools; /**< Bitmask of pools for packet rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
};
/**
@@ -962,7 +1150,7 @@ struct rte_eth_vmdq_rx_conf {
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
/**
- * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -1046,7 +1234,7 @@ struct rte_eth_rxconf {
uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
/**
- * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_queue_offload_capa or rx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1075,7 +1263,7 @@ struct rte_eth_txconf {
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
- * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_queue_offload_capa or tx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1186,12 +1374,17 @@ struct rte_eth_desc_lim {
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
- RTE_FC_NONE = 0, /**< Disable flow control. */
- RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
- RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
- RTE_FC_FULL /**< Enable flow control on both side. */
+ RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+ RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+ RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+ RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
+#define RTE_FC_NONE RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_ETH_FC_FULL
+
/**
* A structure used to configure Ethernet flow control parameter.
* These parameters will be configured into the register of the NIC.
@@ -1222,18 +1415,29 @@ struct rte_eth_pfc_conf {
* @see rte_eth_udp_tunnel
*/
enum rte_eth_tunnel_type {
- RTE_TUNNEL_TYPE_NONE = 0,
- RTE_TUNNEL_TYPE_VXLAN,
- RTE_TUNNEL_TYPE_GENEVE,
- RTE_TUNNEL_TYPE_TEREDO,
- RTE_TUNNEL_TYPE_NVGRE,
- RTE_TUNNEL_TYPE_IP_IN_GRE,
- RTE_L2_TUNNEL_TYPE_E_TAG,
- RTE_TUNNEL_TYPE_VXLAN_GPE,
- RTE_TUNNEL_TYPE_ECPRI,
- RTE_TUNNEL_TYPE_MAX,
+ RTE_ETH_TUNNEL_TYPE_NONE = 0,
+ RTE_ETH_TUNNEL_TYPE_VXLAN,
+ RTE_ETH_TUNNEL_TYPE_GENEVE,
+ RTE_ETH_TUNNEL_TYPE_TEREDO,
+ RTE_ETH_TUNNEL_TYPE_NVGRE,
+ RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+ RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+ RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+ RTE_ETH_TUNNEL_TYPE_ECPRI,
+ RTE_ETH_TUNNEL_TYPE_MAX,
};
+#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1241,11 +1445,16 @@ enum rte_eth_tunnel_type {
* Memory space that can be configured to store Flow Director filters
* in the board memory.
*/
-enum rte_fdir_pballoc_type {
- RTE_FDIR_PBALLOC_64K = 0, /**< 64k. */
- RTE_FDIR_PBALLOC_128K, /**< 128k. */
- RTE_FDIR_PBALLOC_256K, /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+ RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
+ RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
+ RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
};
+#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in RX descriptors.
@@ -1262,9 +1471,9 @@ enum rte_fdir_status_mode {
*
* If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
*/
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
enum rte_fdir_mode mode; /**< Flow Director mode. */
- enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+ enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
enum rte_fdir_status_mode status; /**< How to report FDIR hash. */
/** RX queue of packets matching a "drop" filter in perfect mode. */
uint8_t drop_queue;
@@ -1273,6 +1482,8 @@ struct rte_fdir_conf {
/**< Flex payload configuration. */
};
+#define rte_fdir_conf rte_eth_fdir_conf
+
/**
* UDP tunneling configuration.
*
@@ -1290,7 +1501,7 @@ struct rte_eth_udp_tunnel {
/**
* A structure used to enable/disable specific device interrupts.
*/
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint32_t lsc:1;
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1299,18 +1510,20 @@ struct rte_intr_conf {
uint32_t rmv:1;
};
+#define rte_intr_conf rte_eth_intr_conf
+
/**
* A structure used to configure an Ethernet port.
* Depending upon the RX multi-queue mode, extra advanced
* configuration settings may be needed.
*/
struct rte_eth_conf {
- uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
- used. ETH_LINK_SPEED_FIXED disables link
+ uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+ used. RTE_ETH_LINK_SPEED_FIXED disables link
autonegotiation, and a unique speed shall be
set. Otherwise, the bitmap defines the set of
speeds to be advertised. If the special value
- ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+ RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
supported are advertised. */
struct rte_eth_rxmode rxmode; /**< Port RX configuration. */
struct rte_eth_txmode txmode; /**< Port TX configuration. */
@@ -1336,48 +1549,70 @@ struct rte_eth_conf {
struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
/**< Port vmdq TX configuration. */
} tx_adv_conf; /**< Port TX DCB configuration (union). */
- /** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
- is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+ /**
+ * Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
+ * is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT.
+ */
uint32_t dcb_capability_en;
- struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
- struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+ struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+ struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
};
/**
* RX offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP 0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
-#define DEV_RX_OFFLOAD_SECURITY 0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC 0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY 0x00008000
+#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC 0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH 0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1387,52 +1622,74 @@ struct rte_eth_conf {
/**
* TX offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT 0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* tx queue without SW lock.
*/
-#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
/**< Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
-#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1564,7 +1821,7 @@ struct rte_eth_dev_info {
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
- uint32_t speed_capa; /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
@@ -1668,8 +1925,10 @@ struct rte_eth_xstat_name {
char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};
-#define ETH_DCB_NUM_TCS 8
-#define ETH_MAX_VMDQ_POOL 64
+#define RTE_ETH_DCB_NUM_TCS 8
+#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL 64
+#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
@@ -1680,12 +1939,12 @@ struct rte_eth_dcb_tc_queue_mapping {
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
/** rx queues assigned to tc per Pool */
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
};
/**
@@ -1694,8 +1953,8 @@ struct rte_eth_dcb_tc_queue_mapping {
*/
struct rte_eth_dcb_info {
uint8_t nb_tcs; /**< number of TCs */
- uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
- uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+ uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+ uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
/** rx queues assigned to tc */
struct rte_eth_dcb_tc_queue_mapping tc_queue;
};
@@ -1719,7 +1978,7 @@ enum rte_eth_fec_mode {
/* A structure used to get capabilities per link speed */
struct rte_eth_fec_capa {
- uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+ uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
uint32_t capa; /**< FEC capabilities bitmask */
};
@@ -1742,13 +2001,17 @@ struct rte_eth_fec_capa {
/**@{@name L2 tunnel configuration */
/**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK RTE_ETH_L2_TUNNEL_ENABLE_MASK
/**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK RTE_ETH_L2_TUNNEL_INSERTION_MASK
/**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK RTE_ETH_L2_TUNNEL_STRIPPING_MASK
/**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK RTE_ETH_L2_TUNNEL_FORWARDING_MASK
/**@}*/
/**
@@ -2059,14 +2322,14 @@ uint16_t rte_eth_dev_count_total(void);
* @param speed
* Numerical speed value in Mbps
* @param duplex
- * ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
* @return
* 0 if the speed cannot be mapped
*/
uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
/**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2076,7 +2339,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
const char *rte_eth_dev_rx_offload_name(uint64_t offload);
/**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2170,7 +2433,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
* In addition it contains the hardware offloads features to activate using
- * the DEV_RX_OFFLOAD_* flags.
+ * the RTE_ETH_RX_OFFLOAD_* flags.
* If an offloading set in rx_conf->offloads
* hasn't been set in the input argument eth_conf->rxmode.offloads
* to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2747,7 +3010,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
*
* @param str
* A pointer to a string to be filled with textual representation of
- * device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
* store default link status text.
* @param len
* Length of available memory at 'str' string.
@@ -3293,10 +3556,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
* The port identifier of the Ethernet device.
* @param offload_mask
* The VLAN Offload bit mask can be mixed use with "OR"
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* @return
* - (0) if successful.
* - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3312,10 +3575,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
* The port identifier of the Ethernet device.
* @return
* - (>0) if successful. Bit mask to indicate
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* - (-ENODEV) if *port_id* invalid.
*/
int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5340,7 +5603,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
* rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
* of those packets whose transmission was effectively completed.
*
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 2b6efeef8cf5..555580ab4e71 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2890,7 +2890,7 @@ struct rte_flow_action_rss {
* through.
*/
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
#include "gso_udp4.h"
#define ILLEGAL_UDP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+ ((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
(ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
#define ILLEGAL_TCP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+ ((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
ol_flags = pkt->ol_flags;
if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_tunnel_udp4_segment(pkt, gso_size,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_TCP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_UDP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_udp4_segment(pkt, gso_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
uint32_t gso_types;
/**< the bit mask of required GSO types. The GSO library
* uses the same macros as that of describing device TX
- * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+ * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
* gso_types.
*
* For example, if applications want to segment TCP/IPv4
- * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+ * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
*/
uint16_t gso_size;
/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index d6f167994411..5a5b6b1e33c1 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
* The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
* HW capability, At minimum, the PMD should support
* PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
*/
#define PKT_RX_OUTER_L4_CKSUM_MASK ((1ULL << 21) | (1ULL << 22))
@@ -208,7 +208,7 @@ extern "C" {
* a) Fill outer_l2_len and outer_l3_len in mbuf.
* b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
* c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
*/
#define PKT_TX_OUTER_UDP_CKSUM (1ULL << 41)
@@ -253,7 +253,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
* or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
@@ -266,7 +266,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
* if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
* of the dynamic field to be registered:
* const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
* - The application initializes the PMD, and asks for this feature
- * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
* rxconf. This will make the PMD to register the field by calling
* rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
* stores the returned offset.
--
2.31.1
* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
@ 2021-10-20 19:27 0% ` Ananyev, Konstantin
2021-10-21 6:53 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-20 19:27 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh, Zhang,
Roy Fan, jianjay.zhou, asomalap, ruifeng.wang, Nicolau, Radu,
ajit.khaparde, rnagadheeraj, adwivedi, Power, Ciara, Wang,
Haiyue, jiawenwu, jianwang
Hi Akhil,
> As per current design, rte_cryptodev_sym_session_create() and
> rte_cryptodev_sym_session_init() use separate mempool objects
> for a single session.
> And structure rte_cryptodev_sym_session is not directly used
> by the application, it may cause ABI breakage if the structure
> is modified in future.
>
> To address these two issues, the rte_cryptodev_sym_session_create
> will take one mempool object for both the session and session
> private data. The API rte_cryptodev_sym_session_init will now not
> take mempool object.
> rte_cryptodev_sym_session_create will now return an opaque session
> pointer which will be used by the app in rte_cryptodev_sym_session_init
> and other APIs.
>
> With this change, rte_cryptodev_sym_session_init will send
> pointer to session private data of corresponding driver to the PMD
> based on the driver_id for filling the PMD data.
>
> In data path, opaque session pointer is attached to rte_crypto_op
> and the PMD can call an internal library API to get the session
> private data pointer based on the driver id.
>
> Note: currently nb_drivers are getting updated in RTE_INIT which
> result in increasing the memory requirements for session.
> User can compile off drivers which are not in use to reduce the
> memory consumption of a session.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
With that patch, the ipsec-secgw functional tests crash for AES_GCM test cases.
To be more specific:
examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm
[24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault ip:7f3ac2397027 sp:7ffeaade8848 error:0 in libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]
Looking a bit deeper, it fails at:
#0 0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
from /lib/libIPSec_MB.so.1
#1 0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
from /lib/libIPSec_MB.so.1
#2 0x0000561757073753 in aesni_gcm_session_configure (mb_mgr=0x56175c5fe400,
session=0x17e3b72d8, xform=0x17e05d7c0)
at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
#3 0x00005617570592af in ipsec_mb_sym_session_configure (
dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
#4 0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0 '\000',
sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
at ../lib/cryptodev/rte_cryptodev.c:1736
#5 0x0000561752ef99b7 in create_lookaside_session (
ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
#6 0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
at ../examples/ipsec-secgw/ipsec_process.c:89
#7 0x0000561752f0d7dd in ipsec_process (
ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
at ../examples/ipsec-secgw/ipsec_process.c:300
#8 0x0000561752f21027 in process_pkts_outbound (
ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, traffic=0x7ffd192326a0)
at ../examples/ipsec-secgw/ipsec-secgw.c:839
#9 0x0000561752f21b2e in process_pkts (
qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
#10 0x0000561752f224db in ipsec_poll_mode_worker ()
at ../examples/ipsec-secgw/ipsec-secgw.c:1262
#11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
at ../examples/ipsec-secgw/ipsec_worker.c:654
#12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
#13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
at ../examples/ipsec-secgw/ipsec-secgw.c:2978
(gdb) frame 2
#2 0x0000561757073753 in aesni_gcm_session_configure (mb_mgr=0x56175c5fe400,
session=0x17e3b72d8, xform=0x17e05d7c0)
at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
132 mb_mgr->gcm128_pre(key, &sess->gdata_key);
Because of an unexpected unaligned memory access:
(gdb) disas
Dump of assembler code for function aes_keyexp_128_enc_avx512:
0x00007ff9274f400b <+0>: endbr64
0x00007ff9274f400f <+4>: cmp $0x0,%rdi
0x00007ff9274f4013 <+8>: je 0x7ff9274f41b4 <aes_keyexp_128_enc_avx512+425>
0x00007ff9274f4019 <+14>: cmp $0x0,%rsi
0x00007ff9274f401d <+18>: je 0x7ff9274f41b4 <aes_keyexp_128_enc_avx512+425>
0x00007ff9274f4023 <+24>: vmovdqu (%rdi),%xmm1
=> 0x00007ff9274f4027 <+28>: vmovdqa %xmm1,(%rsi)
(gdb) print/x $rsi
$12 = 0x17e3b72e8
And this is caused because the AES_GCM session private data is no longer
16-byte aligned:
(gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
$29 = (struct aesni_gcm_session *) 0x17e3b72d8
print &((struct aesni_gcm_session *)sess->sess_data[index].data)->gdata_key
$31 = (struct gcm_key_data *) 0x17e3b72e8
As I understand it, the reason is that we changed how sess_data[index].data
is populated. Now it is just:
sess->sess_data[index].data = (void *)((uint8_t *)sess +
rte_cryptodev_sym_get_header_session_size() +
(index * sess->priv_sz));
So, as far as I can see, there is no guarantee that the PMD's private session
data will be 16-byte aligned as expected.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] lpm: fix buffer overflow
@ 2021-10-20 19:55 3% ` David Marchand
2021-10-21 17:15 0% ` Medvedkin, Vladimir
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-20 19:55 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev, Bruce Richardson, alex, dpdk stable
Hello Vladimir,
On Fri, Oct 8, 2021 at 11:29 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch fixes buffer overflow reported by ASAN,
> please reference https://bugs.dpdk.org/show_bug.cgi?id=819
>
> The rte_lpm6 keeps routing information for control plane purpose
> inside the rte_hash table which uses rte_jhash() as a hash function.
> From the rte_jhash() documentation: If input key is not aligned to
> four byte boundaries or a multiple of four bytes in length,
> the memory region just after may be read (but not used in the
> computation).
> rte_lpm6 uses 17 bytes keys consisting of IPv6 address (16 bytes) +
> depth (1 byte).
>
> This patch increases the size of the depth field up to uint32_t
> and sets the alignment to 4 bytes.
>
> Bugzilla ID: 819
> Fixes: 86b3b21952a8 ("lpm6: store rules in hash table")
> Cc: alex@therouter.net
> Cc: stable@dpdk.org
This change should be internal and not break ABI, but are we sure
we want to backport it?
>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
> lib/lpm/rte_lpm6.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
> index 37baabb..d5e0918 100644
> --- a/lib/lpm/rte_lpm6.c
> +++ b/lib/lpm/rte_lpm6.c
> @@ -80,8 +80,8 @@ struct rte_lpm6_rule {
> /** Rules tbl entry key. */
> struct rte_lpm6_rule_key {
> uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
> - uint8_t depth; /**< Rule depth. */
> -};
> + uint32_t depth; /**< Rule depth. */
> +} __rte_aligned(sizeof(uint32_t));
I would recommend doing the same as for the hash tests: keep growing
depth to 32 bits, but drop the alignment enforcement and add a build check
that the structure size is a multiple of sizeof(uint32_t).
>
> /* Header of tbl8 */
> struct rte_lpm_tbl8_hdr {
> --
> 2.7.4
>
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage
@ 2021-10-20 20:24 0% ` Carrillo, Erik G
0 siblings, 0 replies; 200+ results
From: Carrillo, Erik G @ 2021-10-20 20:24 UTC (permalink / raw)
To: pbhagavatula, jerinj; +Cc: dev
Hi Pavan and Jerin,
> -----Original Message-----
> From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
> Sent: Monday, October 18, 2021 6:36 PM
> To: jerinj@marvell.com; Carrillo, Erik G <erik.g.carrillo@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters
> memory to hugepage
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Move memory used by timer adapters to hugepage.
> Allocate memory on the first adapter create or lookup to address both
> primary and secondary process usecases.
> This will prevent TLB misses if any and aligns to memory structure of other
> subsystems.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 2 ++
> lib/eventdev/rte_event_timer_adapter.c | 36
> ++++++++++++++++++++++++--
> 2 files changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 6442c79977..9694b32002 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -226,6 +226,8 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* eventdev: Move memory used by timer adapters to hugepage. This will
> +prevent
> + TLB misses if any and aligns to memory structure of other subsystems.
>
> ABI Changes
> -----------
> diff --git a/lib/eventdev/rte_event_timer_adapter.c
> b/lib/eventdev/rte_event_timer_adapter.c
> index ae55407042..894f532ef0 100644
> --- a/lib/eventdev/rte_event_timer_adapter.c
> +++ b/lib/eventdev/rte_event_timer_adapter.c
> @@ -33,7 +33,7 @@ RTE_LOG_REGISTER_SUFFIX(evtim_logtype,
> adapter.timer, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc,
> NOTICE);
>
> -static struct rte_event_timer_adapter
> adapters[RTE_EVENT_TIMER_ADAPTER_NUM_MAX];
> +static struct rte_event_timer_adapter *adapters;
>
> static const struct event_timer_adapter_ops swtim_ops;
>
> @@ -138,6 +138,17 @@ rte_event_timer_adapter_create_ext(
> int n, ret;
> struct rte_eventdev *dev;
>
> + if (adapters == NULL) {
> + adapters = rte_zmalloc("Eventdev",
> + sizeof(struct rte_event_timer_adapter) *
> +
> RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
> + RTE_CACHE_LINE_SIZE);
> + if (adapters == NULL) {
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> + }
> +
> if (conf == NULL) {
> rte_errno = EINVAL;
> return NULL;
> @@ -312,6 +323,17 @@ rte_event_timer_adapter_lookup(uint16_t
> adapter_id)
> int ret;
> struct rte_eventdev *dev;
>
> + if (adapters == NULL) {
> + adapters = rte_zmalloc("Eventdev",
> + sizeof(struct rte_event_timer_adapter) *
> +
> RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
> + RTE_CACHE_LINE_SIZE);
> + if (adapters == NULL) {
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> + }
> +
> if (adapters[adapter_id].allocated)
> return &adapters[adapter_id]; /* Adapter is already loaded
> */
>
> @@ -358,7 +380,7 @@ rte_event_timer_adapter_lookup(uint16_t
> adapter_id) int rte_event_timer_adapter_free(struct
> rte_event_timer_adapter *adapter) {
> - int ret;
> + int i, ret;
>
> ADAPTER_VALID_OR_ERR_RET(adapter, -EINVAL);
> FUNC_PTR_OR_ERR_RET(adapter->ops->uninit, -EINVAL); @@ -
> 382,6 +404,16 @@ rte_event_timer_adapter_free(struct
> rte_event_timer_adapter *adapter)
> adapter->data = NULL;
> adapter->allocated = 0;
>
> + ret = 0;
> + for (i = 0; i < RTE_EVENT_TIMER_ADAPTER_NUM_MAX; i++)
> + if (adapters[i].allocated)
> + ret = adapter[i].allocated;
> +
I found a typo here, but it looks like this series has already been accepted, so I submitted the following patch for the issue:
http://patchwork.dpdk.org/project/dpdk/patch/20211020202021.1205135-1-erik.g.carrillo@intel.com/
Besides that, this patch and the others I was copied on look good to me.
Thanks,
Erik
> + if (!ret) {
> + rte_free(adapters);
> + adapters = NULL;
> + }
> +
> rte_eventdev_trace_timer_adapter_free(adapter);
> return 0;
> }
> --
> 2.17.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
@ 2021-10-20 21:42 1% ` Stephen Hemminger
2021-10-21 14:16 0% ` Kinsella, Ray
2021-10-27 6:34 0% ` Wang, Yinan
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
1 sibling, 2 replies; 200+ results
From: Stephen Hemminger @ 2021-10-20 21:42 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Reshma Pattan, Ray Kinsella, Anatoly Burakov
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.
Add a new API to allow filtering of captured packets with
a DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 113 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 433 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index 484b1da2b88d..1a8ac30c4da6 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
'acl',
'bbdev',
'bitratestats',
+ 'bpf',
'cfgfile',
'compressdev',
'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'dmadev',
@@ -56,10 +56,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 46a87e233904..71602685d544 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,23 @@ enum pdump_operation {
ENABLE = 2
};
+/* Internal version number in request */
enum pdump_version {
- V1 = 1
+ V1 = 1, /* no filtering or snap */
+ V2 = 2,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ char device[RTE_DEV_NAME_MAX_LEN];
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
};
struct pdump_response {
@@ -63,80 +58,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+/*
+ * The packet capture statistics keep track of packets
+ * accepted, filtered and dropped. These are per-queue
+ * and in memory between primary and secondary processes.
+ */
+static const char MZ_RTE_PDUMP_STATS[] = "rte_pdump_stats";
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter)
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * This uses same BPF return value convention as socket filter
+ * and pcap_offline_filter.
+ * if program returns zero
+ * then packet doesn't match the filter (will be ignored).
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == V2)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +200,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +224,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +258,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +287,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ /* Check for possible DPDK version mismatch */
+ if (!(p->ver == V1 || p->ver == V2)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +365,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +375,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -332,7 +403,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
resp->err_value = set_pdump_rxtx_cbs(cli_req);
}
- strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+ rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
mp_resp.len_param = sizeof(*resp);
mp_resp.num_fds = 0;
if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +418,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -393,14 +474,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -427,12 +515,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -441,26 +529,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+
+ req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ rte_strscpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
- strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+ rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
mp_req.len_param = sizeof(*req);
mp_req.num_fds = 0;
if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -478,11 +562,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function, because although original API
+ * left place holder for future filter, it never checked the value.
+ * Therefore the API can't depend on application passing a non
+ * bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -497,20 +587,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -519,10 +631,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -538,8 +670,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -554,8 +686,68 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* rte_pdump_init was not called */
+ PDUMP_LOG(ERR, "pdump stats not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ /* secondary process looks up the memzone */
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ /* rte_pdump_init was not called in primary process?? */
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused; should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program to run to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * unused; should be NULL
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF program to run to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-20 21:42 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-20 21:42 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Reshma Pattan
Describe the new packet capture library and utility.
Fix the title line on the pdump documentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 69 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 46 +++++++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 251 insertions(+), 88 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 29390504318b..a447c1ab4ac0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -224,3 +224,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 109ec1f6826b..096ebbaf0d1b 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -59,6 +59,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..f933cc7e9311 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Intel Corporation.
+ Copyright(c) 2017-2021 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets much
+like Wireshark's ``dumpcap`` does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as demonstration of ``librte_pdump``
 library. The ``dpdk-pdump`` tool provides more limited functionality
 and depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 89af28dacb72..a8e8e759ecf2 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -44,6 +44,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..fa1994c96f4d
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Microsoft Corporation
+
+.. _pcapng_library:
+
+Packet Capture Next Generation Library
+======================================
+
+Exchanging packet traces becomes more and more critical every day.
+The de facto standard for this is the format defined by libpcap;
+but that format is rather old and is lacking in functionality
+for more modern applications. The `Pcapng file format`_
+is the default capture file format for modern network capture
+processing tools such as `wireshark`_ (can also be read by `tcpdump`_).
+
+The Pcapng library is an API for formatting packet data
+into a Pcapng file.
+The format conforms to the current `Pcapng RFC`_ standard.
+It is designed to be integrated with the packet capture library.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+The output stream is created with ``rte_pcapng_fdopen``,
+and should be closed with ``rte_pcapng_close``.
+
+The library requires a DPDK mempool to allocate mbufs. The mbufs
+need to be able to accommodate additional space for the pcapng packet
+format header and trailer information; the function ``rte_pcapng_mbuf_size``
+should be used to determine the lower bound based on MTU.
+
+Collecting packets is done in two parts. The function ``rte_pcapng_copy``
+is used to format and copy mbuf data and ``rte_pcapng_write_packets``
+writes a burst of packets to the output file.
+
+The function ``rte_pcapng_write_stats`` can be used to write
+statistics information into the output file. The summary statistics
+information is automatically added by ``rte_pcapng_close``.
+
+.. _Tcpdump: https://tcpdump.org/
+.. _Wireshark: https://wireshark.org/
+.. _Pcapng file format: https://github.com/pcapng/pcapng/
+.. _Pcapng RFC: https://datatracker.ietf.org/doc/html/draft-tuexen-opsawg-pcapng
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..f3ff8fd828dc 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+Packet Capture Library
+======================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
 It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
 It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` is set the mbuf data is extended with header and trailer
+to match the format of Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 30175246c74a..c91f36500a7c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -189,6 +189,16 @@ New Features
* Added tests to verify tunnel header verification in IPsec inbound.
* Added tests to verify inner checksum.
+* **Revised packet capture framework.**
+
+ * New dpdk-dumpcap program that has most of the features of the
 wireshark dumpcap utility, including capture of multiple interfaces,
 filtering, and stopping after a given number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancements to the pdump library to support:
+ * Packet filter with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixed capture of packets with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes captured packets to a file
+in the Pcapng capture file format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on the interface name and timestamp.
+If ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
 In DPDK, only ``testpmd`` is modified to initialize the packet capture
 framework; other applications remain untouched. So, if the ``dpdk-dumpcap``
 tool has to be used with any application other than testpmd, the user
 needs to explicitly modify that application to call the packet capture
 framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
 code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 000:00:03.0
+ 1. 000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options of ``dpdk-dumpcap`` follow the Wireshark dumpcap program and
+ are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
@ 2021-10-20 23:06 0% ` Ananyev, Konstantin
2021-10-21 7:35 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-20 23:06 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, andrew.rybchenko; +Cc: nd, zoltan.kiss
>
> Use correct define for the name array size. The change breaks ABI and
> hence cannot be backported to stable branches.
>
> Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
> Cc: zoltan.kiss@schaman.hu
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
> lib/ring/rte_ring_core.h | 7 +------
> 1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
> index 31f7200fa9..46ad584f9c 100644
> --- a/lib/ring/rte_ring_core.h
> +++ b/lib/ring/rte_ring_core.h
> @@ -118,12 +118,7 @@ struct rte_ring_hts_headtail {
> * a problem.
> */
> struct rte_ring {
> - /*
> - * Note: this field kept the RTE_MEMZONE_NAMESIZE size due to ABI
> - * compatibility requirements, it could be changed to RTE_RING_NAMESIZE
> - * next time the ABI changes
> - */
> - char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned;
> + char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
> /**< Name of the ring. */
> int flags; /**< Flags supplied at creation. */
> const struct rte_memzone *memzone;
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
2021-10-20 19:27 0% ` Ananyev, Konstantin
@ 2021-10-21 6:53 0% ` Akhil Goyal
2021-10-21 10:38 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-21 6:53 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
g.singh, Zhang, Roy Fan, jianjay.zhou, asomalap, ruifeng.wang,
Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
Power, Ciara, Wang, Haiyue, jiawenwu, jianwang
> > As per current design, rte_cryptodev_sym_session_create() and
> > rte_cryptodev_sym_session_init() use separate mempool objects
> > for a single session.
> > And structure rte_cryptodev_sym_session is not directly used
> > by the application, it may cause ABI breakage if the structure
> > is modified in future.
> >
> > To address these two issues, the rte_cryptodev_sym_session_create
> > will take one mempool object for both the session and session
> > private data. The API rte_cryptodev_sym_session_init will now not
> > take mempool object.
> > rte_cryptodev_sym_session_create will now return an opaque session
> > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > and other APIs.
> >
> > With this change, rte_cryptodev_sym_session_init will send
> > pointer to session private data of corresponding driver to the PMD
> > based on the driver_id for filling the PMD data.
> >
> > In data path, opaque session pointer is attached to rte_crypto_op
> > and the PMD can call an internal library API to get the session
> > private data pointer based on the driver id.
> >
> > Note: currently nb_drivers are getting updated in RTE_INIT which
> > result in increasing the memory requirements for session.
> > User can compile off drivers which are not in use to reduce the
> > memory consumption of a session.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
>
> With that patch ipsec-secgw functional tests crashes for AES_GCM test-cases.
> To be more specific:
> examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm
>
> [24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault
> ip:7f3ac2397027 sp:7ffeaade8848 error:0 in
> libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]
>
> Looking a bit deeper, it fails at:
> #0 0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
> from /lib/libIPSec_MB.so.1
> #1 0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
> from /lib/libIPSec_MB.so.1
> #2 0x0000561757073753 in aesni_gcm_session_configure
> (mb_mgr=0x56175c5fe400,
> session=0x17e3b72d8, xform=0x17e05d7c0)
> at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> #3 0x00005617570592af in ipsec_mb_sym_session_configure (
> dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
> sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
> #4 0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0
> '\000',
> sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
> at ../lib/cryptodev/rte_cryptodev.c:1736
> #5 0x0000561752ef99b7 in create_lookaside_session (
> ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
> ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
> #6 0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
> ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
> at ../examples/ipsec-secgw/ipsec_process.c:89
> #7 0x0000561752f0d7dd in ipsec_process (
> ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
> at ../examples/ipsec-secgw/ipsec_process.c:300
> #8 0x0000561752f21027 in process_pkts_outbound (
> --Type <RET> for more, q to quit, c to continue without paging--
> ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>,
> traffic=0x7ffd192326a0)
> at ../examples/ipsec-secgw/ipsec-secgw.c:839
> #9 0x0000561752f21b2e in process_pkts (
> qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
> nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
> #10 0x0000561752f224db in ipsec_poll_mode_worker ()
> at ../examples/ipsec-secgw/ipsec-secgw.c:1262
> #11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
> at ../examples/ipsec-secgw/ipsec_worker.c:654
> #12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
> f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
> call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
> #13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
> at ../examples/ipsec-secgw/ipsec-secgw.c:2978
> (gdb) frame 2
> #2 0x0000561757073753 in aesni_gcm_session_configure
> (mb_mgr=0x56175c5fe400,
> session=0x17e3b72d8, xform=0x17e05d7c0)
> at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> 132 mb_mgr->gcm128_pre(key, &sess->gdata_key);
>
> Because of un-expected unaligned memory access:
> (gdb) disas
> Dump of assembler code for function aes_keyexp_128_enc_avx512:
> 0x00007ff9274f400b <+0>: endbr64
> 0x00007ff9274f400f <+4>: cmp $0x0,%rdi
> 0x00007ff9274f4013 <+8>: je 0x7ff9274f41b4
> <aes_keyexp_128_enc_avx512+425>
> 0x00007ff9274f4019 <+14>: cmp $0x0,%rsi
> 0x00007ff9274f401d <+18>: je 0x7ff9274f41b4
> <aes_keyexp_128_enc_avx512+425>
> 0x00007ff9274f4023 <+24>: vmovdqu (%rdi),%xmm1
> => 0x00007ff9274f4027 <+28>: vmovdqa %xmm1,(%rsi)
>
> (gdb) print/x $rsi
> $12 = 0x17e3b72e8
>
> And this is caused because now AES_GCM session private data is not 16B-bits
> aligned anymore:
> (gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
> $29 = (struct aesni_gcm_session *) 0x17e3b72d8
>
> print &((struct aesni_gcm_session *)sess->sess_data[index].data)-
> >gdata_key
> $31 = (struct gcm_key_data *) 0x17e3b72e8
>
> As I understand the reason for that is that we changed the way how
> sess_data[index].data
> is populated. Now it is just:
> sess->sess_data[index].data = (void *)((uint8_t *)sess +
> rte_cryptodev_sym_get_header_session_size() +
> (index * sess->priv_sz));
>
> So, as I can see, there is no guarantee that PMD's private sess data will be
> aligned on 16B
> as expected.
>
Agreed, there is no guarantee that the sess_priv will be aligned.
I believe a particular alignment is a requirement from the PMD side.
Is it possible for the PMD to use __rte_aligned for the fields which are
required to be aligned? For aesni_gcm it is a 16B alignment requirement;
for some other PMD it may be 64B alignment.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
2021-10-20 23:06 0% ` Ananyev, Konstantin
@ 2021-10-21 7:35 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-21 7:35 UTC (permalink / raw)
To: Ananyev, Konstantin, Honnappa Nagarahalli
Cc: dev, andrew.rybchenko, nd, zoltan.kiss
On Thu, Oct 21, 2021 at 1:07 AM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
> > Use correct define for the name array size. The change breaks ABI and
> > hence cannot be backported to stable branches.
> >
> > Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Applied, thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
2021-10-20 18:04 0% ` Akhil Goyal
@ 2021-10-21 8:43 0% ` Zhang, Roy Fan
0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-21 8:43 UTC (permalink / raw)
To: Akhil Goyal, Power, Ciara, dev, Ananyev, Konstantin, thomas,
De Lara Guarch, Pablo
Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, October 20, 2021 7:05 PM
> To: Power, Ciara <ciara.power@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net; Zhang,
> Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com; Anoob Joseph
> <anoobj@marvell.com>; Trahe, Fiona <fiona.trahe@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; matan@nvidia.com; g.singh@nxp.com;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Nicolau, Radu <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> Nagadheeraj Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> jiawenwu@trustnetic.com; jianwang@trustnetic.com; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: RE: [PATCH v3 0/8] crypto/security session framework rework
>
> > > > I am seeing test failures for cryptodev_scheduler_autotest:
> > > > + Tests Total : 638
> > > > + Tests Skipped : 280
> > > > + Tests Executed : 638
> > > > + Tests Unsupported: 0
> > > > + Tests Passed : 18
> > > > + Tests Failed : 340
> > > >
> > > > The error showing for each testcase:
> > > > scheduler_pmd_sym_session_configure() line 487: unable to config
> sym
> > > > session
> > > > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2
> failed
> > to
> > > > configure session details
> > > >
> > > > I believe the problem happens in
> > scheduler_pmd_sym_session_configure.
> > > > The full sess object is no longer accessible in here, but it is required to
> be
> > > > passed to rte_cryptodev_sym_session_init.
> > > > The init function expects access to sess rather than the private data,
> and
> > > now
> > > > fails as a result.
> > > >
> > > > static int
> > > > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > > > struct rte_crypto_sym_xform *xform, void *sess,
> > > > rte_iova_t sess_iova __rte_unused)
> > > > {
> > > > struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > > > uint32_t i;
> > > > int ret;
> > > > for (i = 0; i < sched_ctx->nb_workers; i++) {
> > > > struct scheduler_worker *worker = &sched_ctx->workers[i];
> > > > ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > > > xform);
> > > > if (ret < 0) {
> > > > CR_SCHED_LOG(ERR, "unable to config sym session");
> > > > return ret;
> > > > }
> > > > }
> > > > return 0;
> > > > }
> > > >
> > > It looks like scheduler PMD is managing the stuff on its own for other
> > PMDs.
> > > The APIs are designed such that the app can call session_init multiple
> times
> > > With different dev_id on same sess.
> > > But here scheduler PMD internally want to configure other PMDs
> sess_priv
> > > By calling session_init.
> > >
> > > I wonder, why we have this 2 step session_create and session_init?
> > > Why can't we have it similar to security session create and let the
> scheduler
> > > PMD have its big session private data which can hold priv_data of as many
> > > PMDs
> > > as it want to schedule.
> > >
> > > Konstantin/Fan/Pablo what are your thoughts on this issue?
> > > Can we resolve this issue at priority in RC1(or probably RC2) for this
> release
> > > or
> > > else we defer it for next ABI break release?
> > >
> > > Thomas,
> > > Can we defer this for RC2? It does not seem to be fixed in 1 day.
> >
> > On another thought, this can be fixed with current patch also by having a
> big
> > session
> > Private data for scheduler PMD which is big enough to hold all other PMDs
> > data which
> > it want to schedule and then call the sess_configure function pointer of dev
> > directly.
> > What say? And this PMD change can be done in RC2. And this patchset go
> as
> > is in RC1.
> Here is the diff in scheduler PMD which should fix this issue in current
> patchset.
>
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> index b92ffd6026..0611ea2c6a 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -450,9 +450,8 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev,
> uint16_t qp_id,
> }
>
> static uint32_t
> -scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev
> __rte_unused)
> +get_max_session_priv_size(struct scheduler_ctx *sched_ctx)
> {
> - struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> uint8_t i = 0;
> uint32_t max_priv_sess_size = 0;
>
> @@ -469,20 +468,35 @@ scheduler_pmd_sym_session_get_size(struct
> rte_cryptodev *dev __rte_unused)
> return max_priv_sess_size;
> }
>
> +static uint32_t
> +scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev)
> +{
> + struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> + return get_max_session_priv_size(sched_ctx) * sched_ctx-
> >nb_workers;
> +}
> +
> static int
> scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> struct rte_crypto_sym_xform *xform, void *sess,
> rte_iova_t sess_iova __rte_unused)
> {
> struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> + uint32_t worker_sess_priv_sz = get_max_session_priv_size(sched_ctx);
> uint32_t i;
> int ret;
>
> for (i = 0; i < sched_ctx->nb_workers; i++) {
> struct scheduler_worker *worker = &sched_ctx->workers[i];
> + struct rte_cryptodev *worker_dev =
> + rte_cryptodev_pmd_get_dev(worker->dev_id);
> + uint8_t index = worker_dev->driver_id;
>
> - ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> - xform);
> + ret = worker_dev->dev_ops->sym_session_configure(
> + worker_dev,
> + xform,
> + (uint8_t *)sess + (index * worker_sess_priv_sz),
> + sess_iova + (index * worker_sess_priv_sz));
This won't work. It will make the session configuration finish successfully,
but the private data the worker initialized is not the private data the worker
will use during enqueue/dequeue (each worker only uses the session private
data slot for its own driver id).
> if (ret < 0) {
> CR_SCHED_LOG(ERR, "unable to config sym session");
> return ret;
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-20 15:30 3% ` Dmitry Kozlyuk
@ 2021-10-21 9:16 0% ` Harman Kalra
2021-10-21 12:33 0% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-21 9:16 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 9:01 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> get set APIs
>
> > >
> > > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > > + mem_allocator = rte_malloc_is_ready();
> > > > + if (mem_allocator)
> > > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > > rte_intr_handle),
> > > > + 0);
> > > > + else
> > > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > >
> > > This is problematic way to do this.
> > > The reason to use rte_malloc vs malloc should be determined by usage.
> > >
> > > If the pointer will be shared between primary/secondary process then
> > > it has to be in hugepages (ie rte_malloc). If it is not shared then
> > > then use regular malloc.
> > >
> > > But what you have done is created a method which will be a latent
> > > bug for anyone using primary/secondary process.
> > >
> > > Either:
> > > intr_handle is not allowed to be used in secondary.
> > > Then always use malloc().
> > > Or.
> > > intr_handle can be used by both primary and secondary.
> > > Then always use rte_malloc().
> > > Any code path that allocates intr_handle before pool is
> > > ready is broken.
> >
> > Hi Stephan,
> >
> > Till V2, I implemented this API in a way where user of the API can
> > choose If he wants intr handle to be allocated using malloc or
> > rte_malloc by passing a flag arg to the rte_intr_instanc_alloc API.
> > User of the API will best know if the intr handle is to be shared with
> secondary or not.
> >
> > But after some discussions and suggestions from the community we
> > decided to drop that flag argument and auto detect on whether
> > rte_malloc APIs are ready to be used and thereafter make all further
> allocations via rte_malloc.
> > Currently alarm subsystem (or any driver doing allocation in
> > constructor) gets interrupt instance allocated using glibc malloc that
> > too because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > all further consumers gets instance allocated via rte_malloc.
>
> Just as a comment, bus scanning is the real issue, not the alarms.
> Alarms could be initialized after the memory management (but it's irrelevant
> because their handle is not accessed from the outside).
> However, MM needs to know bus IOVA requirements to initialize, which is
> usually determined by at least bus device requirements.
>
> > I think this should not cause any issue in primary/secondary model as
> > all interrupt instance pointer will be shared.
>
> What do you mean? Aren't we discussing the issue that those allocated early
> are not shared?
>
> > Infact to avoid any surprises of primary/secondary not working we
> > thought of making all allocations via rte_malloc.
>
> I don't see why anyone would not make them shared.
> In order to only use rte_malloc(), we need:
> 1. In bus drivers, move handle allocation from scan to probe stage.
> 2. In EAL, move alarm initialization to after the MM.
> It all can be done later with v3 design---but there are out-of-tree drivers.
> We need to force them to make step 1 at some point.
> I see two options:
> a) Right now have an external API that only works with rte_malloc()
> and internal API with autodetection. Fix DPDK and drop internal API.
> b) Have external API with autodetection. Fix DPDK.
> At the next ABI breakage drop autodetection and libc-malloc.
>
> > David, Thomas, Dmitry, please add if I missed anything.
> >
> > Can we please conclude on this series APIs as API freeze deadline (rc1) is
> very near.
>
> I support v3 design with no options and autodetection, because that's the
> interface we want in the end.
> Implementation can be improved later.
Hi All,
I came across 2 issues introduced by the auto-detection mechanism.
1. In the primary/secondary model, the primary application is started and makes lots of allocations via
rte_malloc*.
Secondary side:
a. The secondary starts; in its "rte_eal_init()" it makes some allocations via rte_*, and during one of them a
request for heap expand is made as the current memseg got exhausted. (malloc_heap_alloc_on_heap_id ()->
alloc_more_mem_on_socket()->try_expand_heap())
b. A request for heap expand is sent to the primary. Please note the secondary holds the spinlock while making
the request. (malloc_heap_alloc_on_heap_id ()->rte_spinlock_lock(&(heap->lock));)
Primary side:
a. Primary receives the request, install a new hugepage and setups up the heap (handle_alloc_request())
b. To inform all the secondaries about the new memseg, primary sends a sync notice where it sets up an
alarm (rte_mp_request_async ()->mp_request_async()).
c. Inside alarm setup API, we register an interrupt callback.
d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
e. Since memory management is detected as up, "rte_intr_instance_alloc()" calls "rte_zmalloc" to allocate
memory, and further inside "malloc_heap_alloc_on_heap_id()" the primary will hit a deadlock
while taking the spinlock, because this spinlock is already held by the secondary.
2. "eal_flags_file_prefix_autotest" is failing because the processes spawned by this test are expected to clean up
their hugepage traces from the respective directories (e.g. /dev/hugepage).
a. Inside eal_cleanup, rte_free()->malloc_heap_free(), the element to be freed is added to the free list and
nearby elements are checked to see whether they can be joined into one big free chunk (malloc_elem_free()).
b. If this free chunk is bigger than the hugepage size, the respective hugepage can be uninstalled after making
sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
But because the interrupt allocations made for PCI intr handles (used for VFIO) and other driver-specific interrupt
handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and the test fails.
There could be more such issues; I think we should first fix DPDK.
1. Memory management should be made independent and should be the first thing to come up in rte_eal_init()
2. rte_eal_cleanup() should be the exact opposite of rte_eal_init(); just like bus_probe, we should have a bus_remove
to clean up all the memory allocations.
Regarding this IRQ series, I would like to fall back to our original design, i.e. rte_intr_instance_alloc() should take
an argument indicating whether its memory should be allocated using glibc malloc or rte_malloc*. The allocation
decision (malloc or rte_malloc) can be based on whether, in the existing code, the interrupt handle is shared.
E.g. a. In the case of alarm, the intr_handle was a global entry and not confined to any structure, so it can be allocated
from normal malloc.
b. A PCI device had a static entry for intr_handle inside "struct rte_pci_device", and memory for struct rte_pci_device is
allocated via normal malloc, so its intr_handle can also be malloc'ed.
c. Some driver with intr_handle inside its priv structure, where this priv structure gets allocated via rte_malloc, so the
intr_handle can also be rte_malloc'ed.
Later once DPDK is fixed up, this argument can be removed and all allocations can be via rte_malloc family without
any auto detection.
David, Dmitry, Thomas, Stephen, please share your views.
Thanks
Harman
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
2021-10-21 6:53 0% ` Akhil Goyal
@ 2021-10-21 10:38 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-21 10:38 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
g.singh, Zhang, Roy Fan, jianjay.zhou, asomalap, ruifeng.wang,
Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
Power, Ciara, Wang, Haiyue, jiawenwu, jianwang
> > > As per current design, rte_cryptodev_sym_session_create() and
> > > rte_cryptodev_sym_session_init() use separate mempool objects
> > > for a single session.
> > > And structure rte_cryptodev_sym_session is not directly used
> > > by the application, it may cause ABI breakage if the structure
> > > is modified in future.
> > >
> > > To address these two issues, the rte_cryptodev_sym_session_create
> > > will take one mempool object for both the session and session
> > > private data. The API rte_cryptodev_sym_session_init will now not
> > > take mempool object.
> > > rte_cryptodev_sym_session_create will now return an opaque session
> > > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > > and other APIs.
> > >
> > > With this change, rte_cryptodev_sym_session_init will send
> > > pointer to session private data of corresponding driver to the PMD
> > > based on the driver_id for filling the PMD data.
> > >
> > > In data path, opaque session pointer is attached to rte_crypto_op
> > > and the PMD can call an internal library API to get the session
> > > private data pointer based on the driver id.
> > >
> > > Note: currently nb_drivers are getting updated in RTE_INIT which
> > > result in increasing the memory requirements for session.
> > > User can compile off drivers which are not in use to reduce the
> > > memory consumption of a session.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> >
> > With that patch ipsec-secgw functional tests crashes for AES_GCM test-cases.
> > To be more specific:
> > examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm
> >
> > [24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault
> > ip:7f3ac2397027 sp:7ffeaade8848 error:0 in
> > libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]
> >
> > Looking a bit deeper, it fails at:
> > #0 0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
> > from /lib/libIPSec_MB.so.1
> > #1 0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
> > from /lib/libIPSec_MB.so.1
> > #2 0x0000561757073753 in aesni_gcm_session_configure
> > (mb_mgr=0x56175c5fe400,
> > session=0x17e3b72d8, xform=0x17e05d7c0)
> > at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> > #3 0x00005617570592af in ipsec_mb_sym_session_configure (
> > dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
> > sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
> > #4 0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0
> > '\000',
> > sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
> > at ../lib/cryptodev/rte_cryptodev.c:1736
> > #5 0x0000561752ef99b7 in create_lookaside_session (
> > ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
> > ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
> > #6 0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
> > ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
> > at ../examples/ipsec-secgw/ipsec_process.c:89
> > #7 0x0000561752f0d7dd in ipsec_process (
> > ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
> > at ../examples/ipsec-secgw/ipsec_process.c:300
> > #8 0x0000561752f21027 in process_pkts_outbound (
> > --Type <RET> for more, q to quit, c to continue without paging--
> > ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>,
> > traffic=0x7ffd192326a0)
> > at ../examples/ipsec-secgw/ipsec-secgw.c:839
> > #9 0x0000561752f21b2e in process_pkts (
> > qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
> > nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
> > #10 0x0000561752f224db in ipsec_poll_mode_worker ()
> > at ../examples/ipsec-secgw/ipsec-secgw.c:1262
> > #11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
> > at ../examples/ipsec-secgw/ipsec_worker.c:654
> > #12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
> > f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
> > call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
> > #13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
> > at ../examples/ipsec-secgw/ipsec-secgw.c:2978
> > (gdb) frame 2
> > #2 0x0000561757073753 in aesni_gcm_session_configure
> > (mb_mgr=0x56175c5fe400,
> > session=0x17e3b72d8, xform=0x17e05d7c0)
> > at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> > 132 mb_mgr->gcm128_pre(key, &sess->gdata_key);
> >
> > Because of un-expected unaligned memory access:
> > (gdb) disas
> > Dump of assembler code for function aes_keyexp_128_enc_avx512:
> > 0x00007ff9274f400b <+0>: endbr64
> > 0x00007ff9274f400f <+4>: cmp $0x0,%rdi
> > 0x00007ff9274f4013 <+8>: je 0x7ff9274f41b4
> > <aes_keyexp_128_enc_avx512+425>
> > 0x00007ff9274f4019 <+14>: cmp $0x0,%rsi
> > 0x00007ff9274f401d <+18>: je 0x7ff9274f41b4
> > <aes_keyexp_128_enc_avx512+425>
> > 0x00007ff9274f4023 <+24>: vmovdqu (%rdi),%xmm1
> > => 0x00007ff9274f4027 <+28>: vmovdqa %xmm1,(%rsi)
> >
> > (gdb) print/x $rsi
> > $12 = 0x17e3b72e8
> >
> > And this is caused because now AES_GCM session private data is not 16B-bits
> > aligned anymore:
> > (gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
> > $29 = (struct aesni_gcm_session *) 0x17e3b72d8
> >
> > print &((struct aesni_gcm_session *)sess->sess_data[index].data)-
> > >gdata_key
> > $31 = (struct gcm_key_data *) 0x17e3b72e8
> >
> > As I understand the reason for that is that we changed the way how
> > sess_data[index].data
> > is populated. Now it is just:
> > sess->sess_data[index].data = (void *)((uint8_t *)sess +
> > rte_cryptodev_sym_get_header_session_size() +
> > (index * sess->priv_sz));
> >
> > So, as I can see, there is no guarantee that PMD's private sess data will be
> > aligned on 16B
> > as expected.
> >
> Agreed, that there is no guarantee that the sess_priv will be aligned.
> I believe this is requirement from the PMD side for a particular alignment.
Yes, it is a PMD-specific requirement.
The problem is that with the new approach you proposed there is no simple way for the PMD to
fulfil that requirement.
In the current version of DPDK:
- The PMD reports the size of its private data; note that it includes the extra space needed
to align its data properly inside the provided buffer.
- Then it is up to the higher layer to allocate a mempool with elements big enough to hold
the PMD private data.
- At session init that mempool is passed to the PMD's sym_session_configure() and it is
the PMD's responsibility to allocate a buffer (from the given mempool) for its private data,
align it properly, and update sess->sess_data[].data.
With this patch:
- PMD still reports the size of its private data, but now it is the cryptodev layer that allocates
the memory for PMD private data and updates sess->sess_data[].data.
So PMD simply has no way to allocate/align its private data in a way it likes to.
Of course it can simply do alignment on the fly for each operation, something like:
void *p = get_sym_session_private_data(sess, dev->driver_id);
sess_priv = RTE_PTR_ALIGN_FLOOR(p, PMD_SES_ALIGN);
But it is way too ugly and error-prone.
Another potential problem with that approach (when cryptodev allocates memory for
PMD private session data and updates sess->sess_data[].data for it) - it could happen
that private data for different PMDs can end up on the same cache-line.
If we'll ever have a case with simultaneous session processing by multiple-devices
it can cause all sorts of performance problems.
All in all - these changes (removing the second mempool, changing the way we allocate/set up
session private data) seem premature to me.
So, I think to go ahead with this series (hiding rte_cryptodev_sym_session) for 21.11
we need to drop changes for sess_data[] management allocation and keep only changes
directly related to hide sym_session.
My apologies for not reviewing/testing properly that series earlier.
> Is it possible for the PMD to use __rte_aligned for the fields which are required to
The data structure inside PMD is properly aligned.
The problem is that the cryptodev layer might now provide the PMD with memory that is not properly aligned.
> Be aligned. For aesni_gcm it is 16B aligned requirement, for some other PMD it may be
> 64B alignment.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-21 9:16 0% ` Harman Kalra
@ 2021-10-21 12:33 0% ` Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-21 12:33 UTC (permalink / raw)
To: Harman Kalra
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
2021-10-21 09:16 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > Sent: Wednesday, October 20, 2021 9:01 PM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> > Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> > dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> > Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> > get set APIs
> >
> > > >
> > > > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > > > + mem_allocator = rte_malloc_is_ready();
> > > > > + if (mem_allocator)
> > > > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > > > rte_intr_handle),
> > > > > + 0);
> > > > > + else
> > > > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > > >
> > > > This is a problematic way to do this.
> > > > The reason to use rte_malloc vs malloc should be determined by usage.
> > > >
> > > > If the pointer will be shared between primary/secondary process then
> > > > it has to be in hugepages (ie rte_malloc). If it is not shared then
> > > > then use regular malloc.
> > > >
> > > > But what you have done is created a method which will be a latent
> > > > bug for anyone using primary/secondary process.
> > > >
> > > > Either:
> > > > intr_handle is not allowed to be used in secondary.
> > > > Then always use malloc().
> > > > Or.
> > > > intr_handle can be used by both primary and secondary.
> > > > Then always use rte_malloc().
> > > > Any code path that allocates intr_handle before pool is
> > > > ready is broken.
> > >
> > > Hi Stephen,
> > >
> > > Till V2, I implemented this API in a way where the user of the API can
> > > choose if they want the intr handle to be allocated using malloc or
> > > rte_malloc by passing a flag arg to the rte_intr_instance_alloc API.
> > > User of the API will best know if the intr handle is to be shared with
> > secondary or not.
> > >
> > > But after some discussions and suggestions from the community we
> > > decided to drop that flag argument and auto detect on whether
> > > rte_malloc APIs are ready to be used and thereafter make all further
> > allocations via rte_malloc.
> > > Currently the alarm subsystem (or any driver doing allocation in a
> > > constructor) gets its interrupt instance allocated using glibc malloc,
> > > because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > > all further consumers get instances allocated via rte_malloc.
> >
> > Just as a comment, bus scanning is the real issue, not the alarms.
> > Alarms could be initialized after the memory management (but it's irrelevant
> > because their handle is not accessed from the outside).
> > However, MM needs to know bus IOVA requirements to initialize, which is
> > usually determined by at least bus device requirements.
> >
> > > I think this should not cause any issues in the primary/secondary model as
> > > all interrupt instance pointers will be shared.
> >
> > What do you mean? Aren't we discussing the issue that those allocated early
> > are not shared?
> >
> > > Infact to avoid any surprises of primary/secondary not working we
> > > thought of making all allocations via rte_malloc.
> >
> > I don't see why anyone would not make them shared.
> > In order to only use rte_malloc(), we need:
> > 1. In bus drivers, move handle allocation from scan to probe stage.
> > 2. In EAL, move alarm initialization to after the MM.
> > It all can be done later with v3 design---but there are out-of-tree drivers.
> > We need to force them to make step 1 at some point.
> > I see two options:
> > a) Right now have an external API that only works with rte_malloc()
> > and internal API with autodetection. Fix DPDK and drop internal API.
> > b) Have external API with autodetection. Fix DPDK.
> > At the next ABI breakage drop autodetection and libc-malloc.
> >
> > > David, Thomas, Dmitry, please add if I missed anything.
> > >
> > > Can we please conclude on this series APIs as API freeze deadline (rc1) is
> > very near.
> >
> > I support v3 design with no options and autodetection, because that's the
> > interface we want in the end.
> > Implementation can be improved later.
>
> Hi All,
>
> I came across 2 issues introduced with auto detection mechanism.
> 1. In case of primary secondary model. Primary application is started which makes lots of allocations via
> rte_malloc*
>
> Secondary side:
> a. Secondary starts; in its "rte_eal_init()" it makes some allocations via rte_*, and during one of them a
> request for heap expansion is made as the current memseg got exhausted. (malloc_heap_alloc_on_heap_id()->
> alloc_more_mem_on_socket()->try_expand_heap())
> b. A request to primary for heap expand is sent. Please note secondary holds the spinlock while making
> the request. (malloc_heap_alloc_on_heap_id ()->rte_spinlock_lock(&(heap->lock));)
>
> Primary side:
> a. Primary receives the request, installs a new hugepage and sets up the heap (handle_alloc_request())
> b. To inform all the secondaries about the new memseg, primary sends a sync notice where it sets up an
> alarm (rte_mp_request_async ()->mp_request_async()).
> c. Inside alarm setup API, we register an interrupt callback.
> d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
> e. Since memory management is detected as up, inside "rte_intr_instance_alloc()" a call to "rte_zmalloc" is made
> to allocate memory, and further inside "malloc_heap_alloc_on_heap_id()" the primary deadlocks
> while taking the spinlock, because this spinlock is already held by the secondary.
>
>
> 2. "eal_flags_file_prefix_autotest" is failing because the processes spawned by this test are expected to clean up
> their hugepage traces from respective directories (eg /dev/hugepage).
> a. Inside eal_cleanup, rte_free()->malloc_heap_free(), where element to be freed is added to the free list and
> checked if nearby elements can be joined together and form a big free chunk (malloc_elem_free()).
> b. If this free chunk is at least as big as the hugepage size, the respective hugepage can be uninstalled after making
> sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
>
> But because of interrupt allocations made for pci intr handles (used for VFIO) and other driver specific interrupt
> handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and test fails.
Sad to hear. But it's a great and thorough analysis.
> There could be more such issues, I think we should firstly fix the DPDK.
> 1. Memory management should be made independent and should be the first thing to come up in rte_eal_init()
As I have explained, buses must be able to report IOVA requirement
at this point (`get_iommu_class()` bus method).
Either `scan()` must complete before that
or `get_iommu_class()` must be able to work before `scan()` is called.
> 2. rte_eal_cleanup() should be exactly opposite to rte_eal_init(), just like bus_probe, we should have bus_remove
> to clean up all the memory allocations.
Yes. For most buses it will be just "unplug each device".
In fact, EAL could do it with `unplug()`, but it is not mandatory.
>
> Regarding this IRQ series, I would like to fall back to our original design, i.e. rte_intr_instance_alloc() should take
> an argument whether its memory should be allocated using glibc malloc or rte_malloc*.
Seems there's no other option to make it on time.
> Decision for allocation
> (malloc or rte_malloc) can be made based on whether, in the existing code, the interrupt handle is shared.
> Eg. a. In case of alarm, intr_handle was a global entry and not confined to any structure, so this can be allocated from
> normal malloc.
> b. PCI device, had static entry for intr_handle inside "struct rte_pci_device" and memory for struct rte_pci_device is
> via normal malloc, so its intr_handle can also be malloc'ed.
> c. Some driver with intr_handle inside its priv structure, and this priv structure gets allocated via rte_malloc, so
> intr_handle can also be rte_malloc'ed.
>
> Later once DPDK is fixed up, this argument can be removed and all allocations can be via rte_malloc family without
> any auto detection.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device
@ 2021-10-21 12:33 0% ` Maxime Coquelin
0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-10-21 12:33 UTC (permalink / raw)
To: Xuan Ding, dev, anatoly.burakov, chenbo.xia
Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
yinan.wang, yvonnex.yang
On 10/11/21 09:59, Xuan Ding wrote:
> This series supports DMA device to use vfio in async vhost.
>
> The first patch extends the capability of current vfio dma mapping
> API to allow partial unmapping for adjacent memory if the platform
> does not support partial unmapping. The second patch involves the
> IOMMU programming for guest memory in async vhost.
>
> v7:
> * Fix an operator error.
>
> v6:
> * Fix a potential memory leak.
>
> v5:
> * Fix issue of a pointer be freed early.
>
> v4:
> * Fix a format issue.
>
> v3:
> * Move the async_map_status flag to virtio_net structure to avoid
> ABI breaking.
>
> v2:
> * Add rte_errno filtering for some devices bound in the kernel driver.
> * Add a flag to check the status of region mapping.
> * Fix one typo.
>
> Xuan Ding (2):
> vfio: allow partially unmapping adjacent memory
> vhost: enable IOMMU for async vhost
>
> lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
> lib/vhost/vhost.h | 4 +
> lib/vhost/vhost_user.c | 116 +++++++++++++-
> 3 files changed, 346 insertions(+), 112 deletions(-)
>
Applied to dpdk-next-virtio/main.
Thanks,
Maxime
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-21 14:16 0% ` Kinsella, Ray
2021-10-27 6:34 0% ` Wang, Yinan
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-21 14:16 UTC (permalink / raw)
To: Stephen Hemminger, dev; +Cc: Reshma Pattan, Anatoly Burakov
On 20/10/2021 22:42, Stephen Hemminger wrote:
> This enhances the DPDK pdump library to support new
> pcapng format and filtering via BPF.
>
> The internal client/server protocol is changed to support
> two versions: the original pdump basic version and a
> new pcapng version.
>
> The internal version number (not part of exposed API or ABI)
> is intentionally increased to cause any attempt to try
> mismatched primary/secondary process to fail.
>
> Add new API to do allow filtering of captured packets with
> DPDK BPF (eBPF) filter program. It keeps statistics
> on packets captured, filtered, and missed (because ring was full).
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Reshma Pattan <reshma.pattan@intel.com>
> ---
> lib/meson.build | 4 +-
> lib/pdump/meson.build | 2 +-
> lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
> lib/pdump/rte_pdump.h | 113 ++++++++++-
> lib/pdump/version.map | 8 +
> 5 files changed, 433 insertions(+), 126 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] lpm: fix buffer overflow
2021-10-20 19:55 3% ` David Marchand
@ 2021-10-21 17:15 0% ` Medvedkin, Vladimir
0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-21 17:15 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Bruce Richardson, alex, dpdk stable
Hi David,
On 20/10/2021 21:55, David Marchand wrote:
> Hello Vladimir,
>
> On Fri, Oct 8, 2021 at 11:29 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> This patch fixes buffer overflow reported by ASAN,
>> please reference https://bugs.dpdk.org/show_bug.cgi?id=819
>>
>> The rte_lpm6 keeps routing information for control plane purpose
>> inside the rte_hash table which uses rte_jhash() as a hash function.
>> From the rte_jhash() documentation: If input key is not aligned to
>> four byte boundaries or a multiple of four bytes in length,
>> the memory region just after may be read (but not used in the
>> computation).
>> rte_lpm6 uses 17 bytes keys consisting of IPv6 address (16 bytes) +
>> depth (1 byte).
>>
>> This patch increases the size of the depth field up to uint32_t
>> and sets the alignment to 4 bytes.
>>
>> Bugzilla ID: 819
>> Fixes: 86b3b21952a8 ("lpm6: store rules in hash table")
>> Cc: alex@therouter.net
>> Cc: stable@dpdk.org
>
> This change should be internal, and not breaking ABI, but are we sure
> we want to backport it?
>
I think yes, I don't see any reason why we should not backport it.
Do you think we should not?
>
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
>> lib/lpm/rte_lpm6.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
>> index 37baabb..d5e0918 100644
>> --- a/lib/lpm/rte_lpm6.c
>> +++ b/lib/lpm/rte_lpm6.c
>> @@ -80,8 +80,8 @@ struct rte_lpm6_rule {
>> /** Rules tbl entry key. */
>> struct rte_lpm6_rule_key {
>> uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
>> - uint8_t depth; /**< Rule depth. */
>> -};
>> + uint32_t depth; /**< Rule depth. */
>> +} __rte_aligned(sizeof(uint32_t));
>
> I would recommend doing the same than for hash tests: keep growing
> depth to 32bits, but no enforcement of alignment and add build check
> on structure size being sizeof(uint32_t) aligned.
>
Agree, will send v2
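For reference, a sketch of the recommended shape -- depth widened to 32 bits, no __rte_aligned, plus a compile-time size check (static_assert standing in for the in-tree RTE_BUILD_BUG_ON):

```c
#include <stdint.h>
#include <assert.h>

#define RTE_LPM6_IPV6_ADDR_SIZE 16

/* Rule key with depth widened to 32 bits: the total size becomes a
 * multiple of 4 bytes, so rte_jhash() never reads past the key. */
struct rte_lpm6_rule_key {
	uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /* Rule IP address. */
	uint32_t depth;                      /* Rule depth. */
};

/* Compile-time guard on the key size. */
static_assert(sizeof(struct rte_lpm6_rule_key) % sizeof(uint32_t) == 0,
	      "rule key size must be a multiple of 4 bytes for rte_jhash");
```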
>
>>
>> /* Header of tbl8 */
>> struct rte_lpm_tbl8_hdr {
>> --
>> 2.7.4
>>
>
>
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v6] ethdev: add namespace
2021-10-20 19:23 1% ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
@ 2021-10-22 2:02 1% ` Ferruh Yigit
2021-10-22 11:03 1% ` [dpdk-dev] [PATCH v7] " Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-22 2:02 UTC (permalink / raw)
To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
Min Hu (Connor),
Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
Beilei Xing, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Steven Webster, Matt Peters,
Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Bruce Richardson,
Konstantin Ananyev, Ruifeng Wang, Rahul Lakkireddy,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Gaetan Rivet, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu,
Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
Yipeng Wang
Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
v2:
* Updated internal components
* Removed deprecation notice
v3:
* Updated missing macros / structs that David highlighted
* Added release notes update
v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete scripts to update user code, although some
shared by Aman:
https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
Sending new version for possible option to get this patch for -rc1 and
work for scripts later, before release.
v5:
* rebased on latest next-net
v6:
* rebased on latest next-net
---
app/proc-info/main.c | 8 +-
app/test-eventdev/test_perf_common.c | 4 +-
app/test-eventdev/test_pipeline_common.c | 10 +-
app/test-flow-perf/config.h | 2 +-
app/test-pipeline/init.c | 8 +-
app/test-pmd/cmdline.c | 286 ++---
app/test-pmd/config.c | 200 ++--
app/test-pmd/csumonly.c | 28 +-
app/test-pmd/flowgen.c | 6 +-
app/test-pmd/macfwd.c | 6 +-
app/test-pmd/macswap_common.h | 6 +-
app/test-pmd/parameters.c | 54 +-
app/test-pmd/testpmd.c | 52 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 6 +-
app/test/test_ethdev_link.c | 68 +-
app/test/test_event_eth_rx_adapter.c | 4 +-
app/test/test_kni.c | 2 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 4 +-
| 28 +-
app/test/test_pmd_perf.c | 12 +-
app/test/virtual_pmd.c | 10 +-
doc/guides/eventdevs/cnxk.rst | 2 +-
doc/guides/eventdevs/octeontx2.rst | 2 +-
doc/guides/nics/af_packet.rst | 2 +-
doc/guides/nics/bnxt.rst | 24 +-
doc/guides/nics/enic.rst | 2 +-
doc/guides/nics/features.rst | 114 +-
doc/guides/nics/fm10k.rst | 6 +-
doc/guides/nics/intel_vf.rst | 10 +-
doc/guides/nics/ixgbe.rst | 12 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/tap.rst | 2 +-
.../generic_segmentation_offload_lib.rst | 8 +-
doc/guides/prog_guide/mbuf_lib.rst | 18 +-
doc/guides/prog_guide/poll_mode_drv.rst | 8 +-
doc/guides/prog_guide/rte_flow.rst | 34 +-
doc/guides/prog_guide/rte_security.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 10 +-
doc/guides/rel_notes/release_21_11.rst | 3 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 4 +-
doc/guides/testpmd_app_ug/run_app.rst | 2 +-
drivers/bus/dpaa/include/process.h | 16 +-
drivers/common/cnxk/roc_npc.h | 2 +-
drivers/net/af_packet/rte_eth_af_packet.c | 20 +-
drivers/net/af_xdp/rte_eth_af_xdp.c | 12 +-
drivers/net/ark/ark_ethdev.c | 16 +-
drivers/net/atlantic/atl_ethdev.c | 88 +-
drivers/net/atlantic/atl_ethdev.h | 18 +-
drivers/net/atlantic/atl_rxtx.c | 6 +-
drivers/net/avp/avp_ethdev.c | 26 +-
drivers/net/axgbe/axgbe_dev.c | 6 +-
drivers/net/axgbe/axgbe_ethdev.c | 104 +-
drivers/net/axgbe/axgbe_ethdev.h | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 2 +-
drivers/net/axgbe/axgbe_rxtx.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 12 +-
drivers/net/bnxt/bnxt.h | 62 +-
drivers/net/bnxt/bnxt_ethdev.c | 172 +--
drivers/net/bnxt/bnxt_flow.c | 6 +-
drivers/net/bnxt/bnxt_hwrm.c | 112 +-
drivers/net/bnxt/bnxt_reps.c | 2 +-
drivers/net/bnxt/bnxt_ring.c | 4 +-
drivers/net/bnxt/bnxt_rxq.c | 28 +-
drivers/net/bnxt/bnxt_rxr.c | 4 +-
drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_common.h | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_txr.c | 4 +-
drivers/net/bnxt/bnxt_vnic.c | 30 +-
drivers/net/bnxt/rte_pmd_bnxt.c | 8 +-
drivers/net/bonding/eth_bond_private.h | 4 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 16 +-
drivers/net/bonding/rte_eth_bond_api.c | 6 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 50 +-
drivers/net/cnxk/cn10k_ethdev.c | 42 +-
drivers/net/cnxk/cn10k_rte_flow.c | 2 +-
drivers/net/cnxk/cn10k_rx.c | 4 +-
drivers/net/cnxk/cn10k_tx.c | 4 +-
drivers/net/cnxk/cn9k_ethdev.c | 60 +-
drivers/net/cnxk/cn9k_rx.c | 4 +-
drivers/net/cnxk/cn9k_tx.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 112 +-
drivers/net/cnxk/cnxk_ethdev.h | 49 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 106 +-
drivers/net/cnxk/cnxk_link.c | 14 +-
drivers/net/cnxk/cnxk_ptp.c | 4 +-
drivers/net/cnxk/cnxk_rte_flow.c | 2 +-
drivers/net/cxgbe/cxgbe.h | 46 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 42 +-
drivers/net/cxgbe/cxgbe_main.c | 12 +-
drivers/net/dpaa/dpaa_ethdev.c | 180 +--
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_flow.c | 32 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 138 +--
drivers/net/dpaa2/dpaa2_ethdev.h | 22 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 8 +-
drivers/net/e1000/e1000_ethdev.h | 18 +-
drivers/net/e1000/em_ethdev.c | 64 +-
drivers/net/e1000/em_rxtx.c | 38 +-
drivers/net/e1000/igb_ethdev.c | 158 +--
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/e1000/igb_rxtx.c | 116 +-
drivers/net/ena/ena_ethdev.c | 70 +-
drivers/net/ena/ena_ethdev.h | 4 +-
| 74 +-
drivers/net/enetc/enetc_ethdev.c | 30 +-
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 88 +-
drivers/net/enic/enic_main.c | 40 +-
drivers/net/enic/enic_res.c | 50 +-
drivers/net/failsafe/failsafe.c | 8 +-
drivers/net/failsafe/failsafe_intr.c | 4 +-
drivers/net/failsafe/failsafe_ops.c | 78 +-
drivers/net/fm10k/fm10k.h | 4 +-
drivers/net/fm10k/fm10k_ethdev.c | 146 +--
drivers/net/fm10k/fm10k_rxtx_vec.c | 6 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 22 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 136 +--
drivers/net/hinic/hinic_pmd_rx.c | 36 +-
drivers/net/hinic/hinic_pmd_rx.h | 22 +-
drivers/net/hns3/hns3_dcb.c | 14 +-
drivers/net/hns3/hns3_ethdev.c | 352 +++---
drivers/net/hns3/hns3_ethdev.h | 12 +-
drivers/net/hns3/hns3_ethdev_vf.c | 100 +-
drivers/net/hns3/hns3_flow.c | 6 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
| 108 +-
| 28 +-
drivers/net/hns3/hns3_rxtx.c | 30 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/hns3/hns3_rxtx_vec.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 272 ++---
drivers/net/i40e/i40e_ethdev.h | 24 +-
drivers/net/i40e/i40e_flow.c | 32 +-
drivers/net/i40e/i40e_hash.c | 158 +--
drivers/net/i40e/i40e_pf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 8 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 48 +-
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 178 +--
drivers/net/iavf/iavf_hash.c | 320 ++---
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx.h | 24 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 6 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 86 +-
drivers/net/ice/ice_dcf_vf_representor.c | 56 +-
drivers/net/ice/ice_ethdev.c | 180 +--
drivers/net/ice/ice_ethdev.h | 26 +-
drivers/net/ice/ice_hash.c | 290 ++---
drivers/net/ice/ice_rxtx.c | 16 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 4 +-
drivers/net/ice/ice_rxtx_vec_common.h | 28 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
drivers/net/igc/igc_ethdev.c | 138 +--
drivers/net/igc/igc_ethdev.h | 54 +-
drivers/net/igc/igc_txrx.c | 48 +-
drivers/net/ionic/ionic_ethdev.c | 138 +--
drivers/net/ionic/ionic_ethdev.h | 12 +-
drivers/net/ionic/ionic_lif.c | 36 +-
drivers/net/ionic/ionic_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 64 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 285 +++--
drivers/net/ixgbe/ixgbe_ethdev.h | 18 +-
drivers/net/ixgbe/ixgbe_fdir.c | 24 +-
drivers/net/ixgbe/ixgbe_flow.c | 2 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 12 +-
drivers/net/ixgbe/ixgbe_pf.c | 34 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 249 ++--
drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 2 +-
drivers/net/ixgbe/ixgbe_tm.c | 16 +-
drivers/net/ixgbe/ixgbe_vf_representor.c | 16 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 14 +-
drivers/net/ixgbe/rte_pmd_ixgbe.h | 4 +-
drivers/net/kni/rte_eth_kni.c | 8 +-
drivers/net/liquidio/lio_ethdev.c | 114 +-
drivers/net/memif/memif_socket.c | 2 +-
drivers/net/memif/rte_eth_memif.c | 16 +-
drivers/net/mlx4/mlx4_ethdev.c | 32 +-
drivers/net/mlx4/mlx4_flow.c | 30 +-
drivers/net/mlx4/mlx4_intr.c | 8 +-
drivers/net/mlx4/mlx4_rxq.c | 18 +-
drivers/net/mlx4/mlx4_txq.c | 24 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_defs.h | 6 +-
drivers/net/mlx5/mlx5_ethdev.c | 6 +-
drivers/net/mlx5/mlx5_flow.c | 54 +-
drivers/net/mlx5/mlx5_flow.h | 12 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
| 10 +-
drivers/net/mlx5/mlx5_rxq.c | 40 +-
drivers/net/mlx5/mlx5_rxtx_vec.h | 8 +-
drivers/net/mlx5/mlx5_tx.c | 30 +-
drivers/net/mlx5/mlx5_txq.c | 58 +-
drivers/net/mlx5/mlx5_vlan.c | 4 +-
drivers/net/mlx5/windows/mlx5_os.c | 4 +-
drivers/net/mvneta/mvneta_ethdev.c | 32 +-
drivers/net/mvneta/mvneta_ethdev.h | 10 +-
drivers/net/mvneta/mvneta_rxtx.c | 2 +-
drivers/net/mvpp2/mrvl_ethdev.c | 112 +-
drivers/net/netvsc/hn_ethdev.c | 70 +-
drivers/net/netvsc/hn_rndis.c | 50 +-
drivers/net/nfb/nfb_ethdev.c | 20 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfp/nfp_common.c | 122 +-
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 50 +-
drivers/net/null/rte_eth_null.c | 28 +-
drivers/net/octeontx/octeontx_ethdev.c | 74 +-
drivers/net/octeontx/octeontx_ethdev.h | 30 +-
drivers/net/octeontx/octeontx_ethdev_ops.c | 26 +-
drivers/net/octeontx2/otx2_ethdev.c | 96 +-
drivers/net/octeontx2/otx2_ethdev.h | 64 +-
drivers/net/octeontx2/otx2_ethdev_devargs.c | 12 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 14 +-
drivers/net/octeontx2/otx2_ethdev_sec.c | 8 +-
drivers/net/octeontx2/otx2_flow.c | 2 +-
drivers/net/octeontx2/otx2_flow_ctrl.c | 36 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_link.c | 40 +-
drivers/net/octeontx2/otx2_mcast.c | 2 +-
drivers/net/octeontx2/otx2_ptp.c | 4 +-
| 70 +-
drivers/net/octeontx2/otx2_rx.c | 4 +-
drivers/net/octeontx2/otx2_tx.c | 2 +-
drivers/net/octeontx2/otx2_vlan.c | 42 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 6 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 +-
drivers/net/pcap/pcap_ethdev.c | 12 +-
drivers/net/pfe/pfe_ethdev.c | 18 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_ethdev.c | 156 +--
drivers/net/qede/qede_filter.c | 42 +-
drivers/net/qede/qede_rxtx.c | 2 +-
drivers/net/qede/qede_rxtx.h | 16 +-
drivers/net/ring/rte_eth_ring.c | 20 +-
drivers/net/sfc/sfc.c | 30 +-
drivers/net/sfc/sfc_ef100_rx.c | 10 +-
drivers/net/sfc/sfc_ef100_tx.c | 20 +-
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 8 +-
drivers/net/sfc/sfc_ef10_tx.c | 32 +-
drivers/net/sfc/sfc_ethdev.c | 50 +-
drivers/net/sfc/sfc_flow.c | 2 +-
drivers/net/sfc/sfc_port.c | 52 +-
drivers/net/sfc/sfc_repr.c | 10 +-
drivers/net/sfc/sfc_rx.c | 50 +-
drivers/net/sfc/sfc_tx.c | 50 +-
drivers/net/softnic/rte_eth_softnic.c | 12 +-
drivers/net/szedata2/rte_eth_szedata2.c | 14 +-
drivers/net/tap/rte_eth_tap.c | 104 +-
| 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 102 +-
drivers/net/thunderx/nicvf_ethdev.h | 40 +-
drivers/net/txgbe/txgbe_ethdev.c | 242 ++--
drivers/net/txgbe/txgbe_ethdev.h | 18 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 24 +-
drivers/net/txgbe/txgbe_fdir.c | 20 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_ipsec.c | 12 +-
drivers/net/txgbe/txgbe_pf.c | 34 +-
drivers/net/txgbe/txgbe_rxtx.c | 308 ++---
drivers/net/txgbe/txgbe_rxtx.h | 4 +-
drivers/net/txgbe/txgbe_tm.c | 16 +-
drivers/net/vhost/rte_eth_vhost.c | 16 +-
drivers/net/virtio/virtio_ethdev.c | 124 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 72 +-
drivers/net/vmxnet3/vmxnet3_ethdev.h | 16 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 16 +-
examples/bbdev_app/main.c | 6 +-
examples/bond/main.c | 14 +-
examples/distributor/main.c | 12 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 18 +-
.../pipeline_worker_generic.c | 16 +-
.../eventdev_pipeline/pipeline_worker_tx.c | 12 +-
examples/flow_classify/flow_classify.c | 4 +-
examples/flow_filtering/main.c | 16 +-
examples/ioat/ioatfwd.c | 8 +-
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 20 +-
examples/ip_reassembly/main.c | 18 +-
examples/ipsec-secgw/ipsec-secgw.c | 32 +-
examples/ipsec-secgw/sa.c | 8 +-
examples/ipv4_multicast/main.c | 6 +-
examples/kni/main.c | 8 +-
examples/l2fwd-crypto/main.c | 10 +-
examples/l2fwd-event/l2fwd_common.c | 10 +-
examples/l2fwd-event/main.c | 2 +-
examples/l2fwd-jobstats/main.c | 8 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l2fwd/main.c | 8 +-
examples/l3fwd-acl/main.c | 18 +-
examples/l3fwd-graph/main.c | 14 +-
examples/l3fwd-power/main.c | 16 +-
examples/l3fwd/l3fwd_event.c | 4 +-
examples/l3fwd/main.c | 18 +-
examples/link_status_interrupt/main.c | 10 +-
.../client_server_mp/mp_server/init.c | 4 +-
examples/multi_process/symmetric_mp/main.c | 14 +-
examples/ntb/ntb_fwd.c | 6 +-
examples/packet_ordering/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 16 +-
examples/pipeline/obj.c | 20 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 16 +-
examples/qos_sched/init.c | 6 +-
examples/rxtx_callbacks/main.c | 8 +-
examples/server_node_efd/server/init.c | 8 +-
examples/skeleton/basicfwd.c | 4 +-
examples/vhost/main.c | 26 +-
examples/vm_power_manager/main.c | 6 +-
examples/vmdq/main.c | 20 +-
examples/vmdq_dcb/main.c | 40 +-
lib/ethdev/ethdev_driver.h | 36 +-
lib/ethdev/rte_ethdev.c | 181 ++-
lib/ethdev/rte_ethdev.h | 1035 +++++++++++------
lib/ethdev/rte_flow.h | 2 +-
lib/gso/rte_gso.c | 20 +-
lib/gso/rte_gso.h | 4 +-
lib/mbuf/rte_mbuf_core.h | 8 +-
lib/mbuf/rte_mbuf_dyn.h | 2 +-
339 files changed, 6645 insertions(+), 6390 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index bfe5ce825b70..a4271047e693 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
}
ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
- if (ret == 0 && fc_conf.mode != RTE_FC_NONE) {
+ if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE) {
printf("\t -- flow control mode %s%s high %u low %u pause %u%s%s\n",
- fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
- fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
- fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+ fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+ fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
fc_conf.autoneg ? " auto" : "",
fc_conf.high_water,
fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct test_perf *t = evt_test_priv(test);
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_rxconf rx_conf;
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
local_port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = rte_eth_dev_info_get(i, &dev_info);
if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
}
/* Enable mbuf fast free if PMD has the capability. */
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
#define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
/* Configuration */
#define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -178,7 +178,7 @@ app_ports_check_link(void)
RTE_LOG(INFO, USER1, "Port %u %s\n",
port,
link_status_text);
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
all_ports_up = 0;
}
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
int duplex;
if (!strcmp(duplexstr, "half")) {
- duplex = ETH_LINK_HALF_DUPLEX;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
} else if (!strcmp(duplexstr, "full")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else if (!strcmp(duplexstr, "auto")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
fprintf(stderr, "Unknown duplex parameter\n");
return -1;
}
if (!strcmp(speedstr, "10")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
} else if (!strcmp(speedstr, "100")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
} else {
- if (duplex != ETH_LINK_FULL_DUPLEX) {
+ if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
fprintf(stderr, "Invalid speed/duplex parameters\n");
return -1;
}
if (!strcmp(speedstr, "1000")) {
- *speed = ETH_LINK_SPEED_1G;
+ *speed = RTE_ETH_LINK_SPEED_1G;
} else if (!strcmp(speedstr, "10000")) {
- *speed = ETH_LINK_SPEED_10G;
+ *speed = RTE_ETH_LINK_SPEED_10G;
} else if (!strcmp(speedstr, "25000")) {
- *speed = ETH_LINK_SPEED_25G;
+ *speed = RTE_ETH_LINK_SPEED_25G;
} else if (!strcmp(speedstr, "40000")) {
- *speed = ETH_LINK_SPEED_40G;
+ *speed = RTE_ETH_LINK_SPEED_40G;
} else if (!strcmp(speedstr, "50000")) {
- *speed = ETH_LINK_SPEED_50G;
+ *speed = RTE_ETH_LINK_SPEED_50G;
} else if (!strcmp(speedstr, "100000")) {
- *speed = ETH_LINK_SPEED_100G;
+ *speed = RTE_ETH_LINK_SPEED_100G;
} else if (!strcmp(speedstr, "200000")) {
- *speed = ETH_LINK_SPEED_200G;
+ *speed = RTE_ETH_LINK_SPEED_200G;
} else if (!strcmp(speedstr, "auto")) {
- *speed = ETH_LINK_SPEED_AUTONEG;
+ *speed = RTE_ETH_LINK_SPEED_AUTONEG;
} else {
fprintf(stderr, "Unknown speed parameter\n");
return -1;
}
}
- if (*speed != ETH_LINK_SPEED_AUTONEG)
- *speed |= ETH_LINK_SPEED_FIXED;
+ if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+ *speed |= RTE_ETH_LINK_SPEED_FIXED;
return 0;
}
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
int ret;
if (!strcmp(res->value, "all"))
- rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
- ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
- ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
- ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
- ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+ RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+ RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+ RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "eth"))
- rss_conf.rss_hf = ETH_RSS_ETH;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH;
else if (!strcmp(res->value, "vlan"))
- rss_conf.rss_hf = ETH_RSS_VLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
else if (!strcmp(res->value, "ip"))
- rss_conf.rss_hf = ETH_RSS_IP;
+ rss_conf.rss_hf = RTE_ETH_RSS_IP;
else if (!strcmp(res->value, "udp"))
- rss_conf.rss_hf = ETH_RSS_UDP;
+ rss_conf.rss_hf = RTE_ETH_RSS_UDP;
else if (!strcmp(res->value, "tcp"))
- rss_conf.rss_hf = ETH_RSS_TCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_TCP;
else if (!strcmp(res->value, "sctp"))
- rss_conf.rss_hf = ETH_RSS_SCTP;
+ rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
else if (!strcmp(res->value, "ether"))
- rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
else if (!strcmp(res->value, "port"))
- rss_conf.rss_hf = ETH_RSS_PORT;
+ rss_conf.rss_hf = RTE_ETH_RSS_PORT;
else if (!strcmp(res->value, "vxlan"))
- rss_conf.rss_hf = ETH_RSS_VXLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
else if (!strcmp(res->value, "geneve"))
- rss_conf.rss_hf = ETH_RSS_GENEVE;
+ rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
else if (!strcmp(res->value, "nvgre"))
- rss_conf.rss_hf = ETH_RSS_NVGRE;
+ rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
else if (!strcmp(res->value, "l3-pre32"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
else if (!strcmp(res->value, "l3-pre96"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
else if (!strcmp(res->value, "l3-src-only"))
- rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
else if (!strcmp(res->value, "l3-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
else if (!strcmp(res->value, "l4-src-only"))
- rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
else if (!strcmp(res->value, "l4-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
else if (!strcmp(res->value, "l2-src-only"))
- rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
else if (!strcmp(res->value, "l2-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
else if (!strcmp(res->value, "l2tpv3"))
- rss_conf.rss_hf = ETH_RSS_L2TPV3;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
else if (!strcmp(res->value, "esp"))
- rss_conf.rss_hf = ETH_RSS_ESP;
+ rss_conf.rss_hf = RTE_ETH_RSS_ESP;
else if (!strcmp(res->value, "ah"))
- rss_conf.rss_hf = ETH_RSS_AH;
+ rss_conf.rss_hf = RTE_ETH_RSS_AH;
else if (!strcmp(res->value, "pfcp"))
- rss_conf.rss_hf = ETH_RSS_PFCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
else if (!strcmp(res->value, "pppoe"))
- rss_conf.rss_hf = ETH_RSS_PPPOE;
+ rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
else if (!strcmp(res->value, "gtpu"))
- rss_conf.rss_hf = ETH_RSS_GTPU;
+ rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
else if (!strcmp(res->value, "ecpri"))
- rss_conf.rss_hf = ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "mpls"))
- rss_conf.rss_hf = ETH_RSS_MPLS;
+ rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
else if (!strcmp(res->value, "ipv4-chksum"))
- rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+ rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
else if (!strcmp(res->value, "none"))
rss_conf.rss_hf = 0;
else if (!strcmp(res->value, "level-default")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
} else if (!strcmp(res->value, "level-outer")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
} else if (!strcmp(res->value, "level-inner")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
} else if (!strcmp(res->value, "default"))
use_default = 1;
else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
return -1;
}
- idx = hash_index / RTE_RETA_GROUP_SIZE;
- shift = hash_index % RTE_RETA_GROUP_SIZE;
+ idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+ shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[idx].mask |= (1ULL << shift);
reta_conf[idx].reta[shift] = nb_queue;
}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
} else
printf("The reta size of port %d is %u\n",
res->port_id, dev_info.reta_size);
- if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+ if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
fprintf(stderr,
"Currently do not support more than %u entries of redirection table\n",
- ETH_RSS_RETA_SIZE_512);
+ RTE_ETH_RSS_RETA_SIZE_512);
return;
}
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
char *end;
char *str_fld[8];
uint16_t i;
- uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
int ret;
p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
if (ret != 0)
return;
- max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+ max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
if (res->size == 0 || res->size > max_reta_size) {
fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
- if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+ if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
fprintf(stderr,
"The invalid number of traffic class, only 4 or 8 allowed.\n");
return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
enum rte_vlan_type vlan_type;
if (!strcmp(res->vlan_type, "inner"))
- vlan_type = ETH_VLAN_TYPE_INNER;
+ vlan_type = RTE_ETH_VLAN_TYPE_INNER;
else if (!strcmp(res->vlan_type, "outer"))
- vlan_type = ETH_VLAN_TYPE_OUTER;
+ vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
else {
fprintf(stderr, "Unknown vlan type\n");
return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
printf("Parse tunnel is %s\n",
(ports[port_id].parse_tunnel) ? "on" : "off");
printf("IP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
printf("UDP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
printf("TCP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
printf("SCTP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
printf("Outer-Ip checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
printf("Outer-Udp checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
/* display warnings if configuration is not supported by the NIC */
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
- if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware UDP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware TCP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
== 0) {
fprintf(stderr,
"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
if (!strcmp(res->proto, "ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_IPV4_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
} else {
fprintf(stderr,
"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_UDP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
} else {
fprintf(stderr,
"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "tcp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_TCP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
} else {
fprintf(stderr,
"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "sctp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SCTP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
} else {
fprintf(stderr,
"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
} else {
fprintf(stderr,
"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
} else {
fprintf(stderr,
"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr, "Error: TSO is not supported by port %d\n",
res->port_id);
return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
if (ports[res->port_id].tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_TCP_TSO;
+ ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO for non-tunneled packets is disabled\n");
} else {
ports[res->port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO segment size for non-tunneled packets is %d\n",
ports[res->port_id].tso_segsz);
}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr,
"Warning: TSO enabled but not supported by port %d\n",
res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
return dev_info;
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
fprintf(stderr,
"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
fprintf(stderr,
"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
fprintf(stderr,
"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
fprintf(stderr,
"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
fprintf(stderr,
"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
fprintf(stderr,
"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
dev_info = check_tunnel_tso_nic_support(res->port_id);
if (ports[res->port_id].tunnel_tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ ~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
printf("TSO for tunneled packets is disabled\n");
} else {
- uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
ports[res->port_id].dev_conf.txmode.offloads |=
(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
fprintf(stderr,
"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
if (!(ports[res->port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
fprintf(stderr,
"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
return;
}
- if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
rx_fc_en = true;
- if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
tx_fc_en = true;
printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
return;
}
- if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
rx_fc_en = 1;
- if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
tx_fc_en = 1;
}
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
if (!strcmp(res->what,"rxmode")) {
if (!strcmp(res->mode, "AUPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
else if (!strcmp(res->mode, "ROPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
else if (!strcmp(res->mode, "BAM"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
else if (!strncmp(res->mode, "MPE",3))
- vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
}
RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
int ret;
tunnel_udp.udp_port = res->udp_port;
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
tunnel_udp.udp_port = res->udp_port;
if (!strcmp(res->tunnel_type, "vxlan")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
} else if (!strcmp(res->tunnel_type, "geneve")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
} else if (!strcmp(res->tunnel_type, "ecpri")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
} else {
fprintf(stderr, "Invalid tunnel type\n");
return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
#endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MACSEC_INSERT;
+ RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_disable(port_id);
#endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MACSEC_INSERT;
+ ~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cad78350dcc9..a18871d461c4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
};
const struct rss_type_info rss_type_table[] = {
- { "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
- ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
- ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
- ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+ RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+ RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
{ "none", 0 },
- { "eth", ETH_RSS_ETH },
- { "l2-src-only", ETH_RSS_L2_SRC_ONLY },
- { "l2-dst-only", ETH_RSS_L2_DST_ONLY },
- { "vlan", ETH_RSS_VLAN },
- { "s-vlan", ETH_RSS_S_VLAN },
- { "c-vlan", ETH_RSS_C_VLAN },
- { "ipv4", ETH_RSS_IPV4 },
- { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
- { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
- { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
- { "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
- { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
- { "ipv6", ETH_RSS_IPV6 },
- { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
- { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
- { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
- { "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
- { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
- { "l2-payload", ETH_RSS_L2_PAYLOAD },
- { "ipv6-ex", ETH_RSS_IPV6_EX },
- { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
- { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
- { "port", ETH_RSS_PORT },
- { "vxlan", ETH_RSS_VXLAN },
- { "geneve", ETH_RSS_GENEVE },
- { "nvgre", ETH_RSS_NVGRE },
- { "ip", ETH_RSS_IP },
- { "udp", ETH_RSS_UDP },
- { "tcp", ETH_RSS_TCP },
- { "sctp", ETH_RSS_SCTP },
- { "tunnel", ETH_RSS_TUNNEL },
+ { "eth", RTE_ETH_RSS_ETH },
+ { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+ { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+ { "vlan", RTE_ETH_RSS_VLAN },
+ { "s-vlan", RTE_ETH_RSS_S_VLAN },
+ { "c-vlan", RTE_ETH_RSS_C_VLAN },
+ { "ipv4", RTE_ETH_RSS_IPV4 },
+ { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+ { "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", RTE_ETH_RSS_IPV6 },
+ { "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+ { "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+ { "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+ { "port", RTE_ETH_RSS_PORT },
+ { "vxlan", RTE_ETH_RSS_VXLAN },
+ { "geneve", RTE_ETH_RSS_GENEVE },
+ { "nvgre", RTE_ETH_RSS_NVGRE },
+ { "ip", RTE_ETH_RSS_IP },
+ { "udp", RTE_ETH_RSS_UDP },
+ { "tcp", RTE_ETH_RSS_TCP },
+ { "sctp", RTE_ETH_RSS_SCTP },
+ { "tunnel", RTE_ETH_RSS_TUNNEL },
{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
- { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
- { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
- { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
- { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
- { "esp", ETH_RSS_ESP },
- { "ah", ETH_RSS_AH },
- { "l2tpv3", ETH_RSS_L2TPV3 },
- { "pfcp", ETH_RSS_PFCP },
- { "pppoe", ETH_RSS_PPPOE },
- { "gtpu", ETH_RSS_GTPU },
- { "ecpri", ETH_RSS_ECPRI },
- { "mpls", ETH_RSS_MPLS },
- { "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
- { "l4-chksum", ETH_RSS_L4_CHKSUM },
+ { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+ { "esp", RTE_ETH_RSS_ESP },
+ { "ah", RTE_ETH_RSS_AH },
+ { "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+ { "pfcp", RTE_ETH_RSS_PFCP },
+ { "pppoe", RTE_ETH_RSS_PPPOE },
+ { "gtpu", RTE_ETH_RSS_GTPU },
+ { "ecpri", RTE_ETH_RSS_ECPRI },
+ { "mpls", RTE_ETH_RSS_MPLS },
+ { "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+ { "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
{ NULL, 0 },
};
@@ -538,39 +538,39 @@ static void
device_infos_display_speeds(uint32_t speed_capa)
{
printf("\n\tDevice speed capability:");
- if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+ if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
printf(" Autonegotiate (all speeds)");
- if (speed_capa & ETH_LINK_SPEED_FIXED)
+ if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
printf(" Disable autonegotiate (fixed speed) ");
- if (speed_capa & ETH_LINK_SPEED_10M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
printf(" 10 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_10M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M)
printf(" 10 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
printf(" 100 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M)
printf(" 100 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_1G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_1G)
printf(" 1 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_2_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
printf(" 2.5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_5G)
printf(" 5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_10G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10G)
printf(" 10 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_20G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_20G)
printf(" 20 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_25G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_25G)
printf(" 25 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_40G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_40G)
printf(" 40 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_50G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_50G)
printf(" 50 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_56G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_56G)
printf(" 56 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_100G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100G)
printf(" 100 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_200G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_200G)
printf(" 200 Gbps ");
}
@@ -723,9 +723,9 @@ port_infos_display(portid_t port_id)
printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
- printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex"));
- printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+ printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
("On") : ("Off"));
if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -743,22 +743,22 @@ port_infos_display(portid_t port_id)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (vlan_offload >= 0){
printf("VLAN offload: \n");
- if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
printf(" strip on, ");
else
printf(" strip off, ");
- if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
printf("filter on, ");
else
printf("filter off, ");
- if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
printf("extend on, ");
else
printf("extend off, ");
- if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
printf("qinq strip on\n");
else
printf("qinq strip off\n");
@@ -2953,8 +2953,8 @@ port_rss_reta_info(portid_t port_id,
}
for (i = 0; i < nb_entries; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3427,7 +3427,7 @@ dcb_fwd_config_setup(void)
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
fwd_lcores[lc_id]->stream_idx = sm_id;
- for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+ for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
@@ -4490,11 +4490,11 @@ vlan_extend_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
} else {
- vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4520,11 +4520,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
- vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4565,11 +4565,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
- vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4595,11 +4595,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
} else {
- vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4669,7 +4669,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (ports[port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_QINQ_INSERT) {
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
fprintf(stderr, "Error, as QinQ has been enabled.\n");
return;
}
@@ -4678,7 +4678,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
fprintf(stderr,
"Error: vlan insert is not supported by port %d\n",
port_id);
@@ -4686,7 +4686,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
ports[port_id].tx_vlan_id = vlan_id;
}
@@ -4705,7 +4705,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
fprintf(stderr,
"Error: qinq insert not supported by port %d\n",
port_id);
@@ -4713,8 +4713,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = vlan_id;
ports[port_id].tx_vlan_id_outer = vlan_id_outer;
}
@@ -4723,8 +4723,8 @@ void
tx_vlan_reset(portid_t port_id)
{
ports[port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = 0;
ports[port_id].tx_vlan_id_outer = 0;
}
@@ -5130,7 +5130,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
ret = eth_link_get_nowait_print_err(port_id, &link);
if (ret < 0)
return 1;
- if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+ if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
rate > link.link_speed) {
fprintf(stderr,
"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= PKT_TX_UDP_CKSUM;
} else {
udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
if (tso_segsz)
ol_flags |= PKT_TX_TCP_SEG;
- else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
} else {
tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
((char *)l3_hdr + info->l3_len);
/* sctp payload must be a multiple of 4 to be
* offloaded */
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
((ipv4_hdr->total_length & 0x3) == 0)) {
ol_flags |= PKT_TX_SCTP_CKSUM;
} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ipv4_hdr->hdr_checksum = 0;
ol_flags |= PKT_TX_OUTER_IPV4;
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
ol_flags |= PKT_TX_OUTER_IP_CKSUM;
else
ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ol_flags |= PKT_TX_TCP_SEG;
/* Skip SW outer UDP checksum generation if HW supports it */
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
udp_hdr->dgram_cksum
= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
if (info.tunnel_tso_segsz ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
m->outer_l2_len = info.outer_l2_len;
m->outer_l3_len = info.outer_l3_len;
m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
rte_be_to_cpu_16(info.outer_ethertype),
info.outer_l3_len);
/* dump tx packet info */
- if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+ if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
info.tso_segsz != 0)
printf("tx: m->l2_len=%d m->l3_len=%d "
"m->l4_len=%d\n",
m->l2_len, m->l3_len, m->l4_len);
if (info.is_tunnel == 1) {
if ((tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
(tx_ol_flags & PKT_TX_OUTER_IPV6))
printf("tx: m->outer_l2_len=%d "
"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags |= PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
txp = &ports[fs->tx_port];
tx_offloads = txp->dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (i = 0; i < nb_rx; i++) {
if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
{
uint64_t ol_flags = 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
PKT_TX_VLAN : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
PKT_TX_QINQ : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
PKT_TX_MACSEC : 0;
return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index afc75f6bd213..cb40917077ea 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -547,29 +547,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
static int
parse_link_speed(int n)
{
- uint32_t speed = ETH_LINK_SPEED_FIXED;
+ uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
switch (n) {
case 1000:
- speed |= ETH_LINK_SPEED_1G;
+ speed |= RTE_ETH_LINK_SPEED_1G;
break;
case 10000:
- speed |= ETH_LINK_SPEED_10G;
+ speed |= RTE_ETH_LINK_SPEED_10G;
break;
case 25000:
- speed |= ETH_LINK_SPEED_25G;
+ speed |= RTE_ETH_LINK_SPEED_25G;
break;
case 40000:
- speed |= ETH_LINK_SPEED_40G;
+ speed |= RTE_ETH_LINK_SPEED_40G;
break;
case 50000:
- speed |= ETH_LINK_SPEED_50G;
+ speed |= RTE_ETH_LINK_SPEED_50G;
break;
case 100000:
- speed |= ETH_LINK_SPEED_100G;
+ speed |= RTE_ETH_LINK_SPEED_100G;
break;
case 200000:
- speed |= ETH_LINK_SPEED_200G;
+ speed |= RTE_ETH_LINK_SPEED_200G;
break;
case 100:
case 10:
@@ -1002,13 +1002,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
if (!strcmp(optarg, "64K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_64K;
+ RTE_ETH_FDIR_PBALLOC_64K;
else if (!strcmp(optarg, "128K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_128K;
+ RTE_ETH_FDIR_PBALLOC_128K;
else if (!strcmp(optarg, "256K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_256K;
+ RTE_ETH_FDIR_PBALLOC_256K;
else
rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
" must be: 64K or 128K or 256K\n",
@@ -1050,34 +1050,34 @@ launch_args_parse(int argc, char** argv)
}
#endif
if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
- rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
- rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
- rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
if (!strcmp(lgopts[opt_idx].name,
"enable-rx-timestamp"))
- rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-filter"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-extend"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-qinq-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
rx_drop_en = 1;
@@ -1099,13 +1099,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
set_pkt_forwarding_mode(optarg);
if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
- rss_hf = ETH_RSS_IP;
+ rss_hf = RTE_ETH_RSS_IP;
if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
- rss_hf = ETH_RSS_UDP;
+ rss_hf = RTE_ETH_RSS_UDP;
if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
- rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
- rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
if (!strcmp(lgopts[opt_idx].name, "rxq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1495,12 +1495,12 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
char *end = NULL;
n = strtoul(optarg, &end, 16);
- if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+ if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
else
rte_exit(EXIT_FAILURE,
"rx-mq-mode must be >= 0 and <= %d\n",
- ETH_MQ_RX_VMDQ_DCB_RSS);
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
}
if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6d5bbc82404e..abfa8395ccdc 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -349,7 +349,7 @@ uint64_t noisy_lkup_num_reads_writes;
/*
* Receive Side Scaling (RSS) configuration.
*/
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
/*
* Port topology configuration
@@ -460,12 +460,12 @@ lcoreid_t latencystats_lcore_id = -1;
struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
};
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
.mode = RTE_FDIR_MODE_NONE,
- .pballoc = RTE_FDIR_PBALLOC_64K,
+ .pballoc = RTE_ETH_FDIR_PBALLOC_64K,
.status = RTE_FDIR_REPORT_STATUS,
.mask = {
.vlan_tci_mask = 0xFFEF,
@@ -524,7 +524,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
/*
* hexadecimal bitmask of RX mq mode can be enabled.
*/
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
/*
* Used to set forced link speed
@@ -1578,9 +1578,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Apply Rx offloads configuration */
for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1717,8 +1717,8 @@ init_config(void)
init_port_config();
- gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
/*
* Records which Mbuf pool to use by each logical core, if needed.
*/
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3769,17 +3769,17 @@ init_port_config(void)
if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
port->dev_conf.rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_RSS);
} else {
- port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
for (i = 0;
i < port->dev_info.nb_rx_queues;
i++)
port->rx_conf[i].offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
}
}
@@ -3867,9 +3867,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->enable_default_pool = 0;
vmdq_rx_conf->default_pool = 0;
vmdq_rx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_tx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3877,7 +3877,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->pool_map[i].pools =
1 << (i % vmdq_rx_conf->nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
}
@@ -3885,8 +3885,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
/* set DCB mode of RX and TX of multiple queues */
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
- eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ (rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
} else {
struct rte_eth_dcb_rx_conf *rx_conf =
&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3902,23 +3902,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
rx_conf->nb_tcs = num_tcs;
tx_conf->nb_tcs = num_tcs;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
rx_conf->dcb_tc[i] = i % num_tcs;
tx_conf->dcb_tc[i] = i % num_tcs;
}
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
eth_conf->rx_adv_conf.rss_conf = rss_conf;
- eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
}
if (pfc_en)
eth_conf->dcb_capability_en =
- ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+ RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
else
- eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+ eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
return 0;
}
@@ -3947,7 +3947,7 @@ init_port_dcb_config(portid_t pid,
retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
if (retval < 0)
return retval;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
/* re-configure the device . */
retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3997,7 +3997,7 @@ init_port_dcb_config(portid_t pid,
rxtx_port_config(pid);
/* VLAN filter */
- rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
for (i = 0; i < RTE_DIM(vlan_tags); i++)
rx_vft_set(pid, vlan_tags[i], 1);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bf3669134aa0..cd1e623ad67a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -493,7 +493,7 @@ extern lcoreid_t bitrate_lcore_id;
extern uint8_t bitrate_enabled;
#endif
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
extern uint32_t max_rx_pkt_len;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
tx_offloads = txp->dev_conf.txmode.offloads;
vlan_tci = txp->tx_vlan_id;
vlan_tci_outer = txp->tx_vlan_id_outer;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
text, strlen(text), "Invalid default link status string");
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_FIXED;
- link_status.link_speed = ETH_SPEED_NUM_10M,
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #2: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_NONE;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
"string with HDX");
/* test max str len */
- link_status.link_speed = ETH_SPEED_NUM_200G;
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_AUTONEG;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #4:len = %d, %s\n", ret, text);
RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
int ret = 0;
struct rte_eth_link link_status = {
.link_speed = 55555,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
const char *value;
uint32_t link_speed;
} speed_str_map[] = {
- { "None", ETH_SPEED_NUM_NONE },
- { "10 Mbps", ETH_SPEED_NUM_10M },
- { "100 Mbps", ETH_SPEED_NUM_100M },
- { "1 Gbps", ETH_SPEED_NUM_1G },
- { "2.5 Gbps", ETH_SPEED_NUM_2_5G },
- { "5 Gbps", ETH_SPEED_NUM_5G },
- { "10 Gbps", ETH_SPEED_NUM_10G },
- { "20 Gbps", ETH_SPEED_NUM_20G },
- { "25 Gbps", ETH_SPEED_NUM_25G },
- { "40 Gbps", ETH_SPEED_NUM_40G },
- { "50 Gbps", ETH_SPEED_NUM_50G },
- { "56 Gbps", ETH_SPEED_NUM_56G },
- { "100 Gbps", ETH_SPEED_NUM_100G },
- { "200 Gbps", ETH_SPEED_NUM_200G },
- { "Unknown", ETH_SPEED_NUM_UNKNOWN },
+ { "None", RTE_ETH_SPEED_NUM_NONE },
+ { "10 Mbps", RTE_ETH_SPEED_NUM_10M },
+ { "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+ { "1 Gbps", RTE_ETH_SPEED_NUM_1G },
+ { "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+ { "5 Gbps", RTE_ETH_SPEED_NUM_5G },
+ { "10 Gbps", RTE_ETH_SPEED_NUM_10G },
+ { "20 Gbps", RTE_ETH_SPEED_NUM_20G },
+ { "25 Gbps", RTE_ETH_SPEED_NUM_25G },
+ { "40 Gbps", RTE_ETH_SPEED_NUM_40G },
+ { "50 Gbps", RTE_ETH_SPEED_NUM_50G },
+ { "56 Gbps", RTE_ETH_SPEED_NUM_56G },
+ { "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+ { "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+ { "Unknown", RTE_ETH_SPEED_NUM_UNKNOWN },
{ "Invalid", 50505 }
};
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
.intr_conf = {
.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
};
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
static const struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
struct rte_eth_rss_conf rss_conf;
uint8_t rss_key[40];
- struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
uint8_t is_slave;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
- struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
struct slave_conf slave_ports[SLAVE_COUNT];
struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params = {
*/
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IPV6,
+ .rss_hf = RTE_ETH_RSS_IPV6,
},
},
.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
static int
reta_set(uint16_t port_id, uint8_t value, int reta_size)
{
- struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
int i, j;
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
/* select all fields to set */
reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
reta_conf[i].reta[j] = value;
}
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
for (i = 0; i < test_params.bond_dev_info.reta_size;
i++) {
- int index = i / RTE_RETA_GROUP_SIZE;
- int shift = i % RTE_RETA_GROUP_SIZE;
+ int index = i / RTE_ETH_RETA_GROUP_SIZE;
+ int shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (port->reta_conf[index].reta[shift] !=
test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
bond_reta_fetch(void) {
unsigned j;
- for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+ for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
j++)
test_params.bond_reta_conf[j].mask = ~0LL;
@@ -268,7 +268,7 @@ static int
slave_reta_fetch(struct slave_conf *port) {
unsigned j;
- for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
port->reta_conf[j].mask = ~0LL;
TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 1, /* enable loopback */
};
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
/* bulk alloc rx, full-featured tx */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "hybrid")) {
/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
*/
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "full")) {
/* full feature rx,tx pair */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
void *pkt = NULL;
struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
int wait_to_complete __rte_unused)
{
if (!bonded_eth_dev->data->dev_started)
- bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
eth_dev->data->nb_rx_queues = (uint16_t)1;
eth_dev->data->nb_tx_queues = (uint16_t)1;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
- eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
------------------------
The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
application.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
* TX: only the following reduced set of transmit offloads is supported in
vector mode::
- DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* RX: only the following reduced set of receive offloads is supported in
vector mode (note that jumbo MTU is allowed only when the MTU setting
- does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
- DEV_RX_OFFLOAD_VLAN_STRIP
- DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_IPV4_CKSUM
- DEV_RX_OFFLOAD_UDP_CKSUM
- DEV_RX_OFFLOAD_TCP_CKSUM
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
- DEV_RX_OFFLOAD_RSS_HASH
- DEV_RX_OFFLOAD_VLAN_FILTER
+ does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_RSS_HASH
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER
The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
.. code-block:: console
vlan_offload = rte_eth_dev_get_vlan_offload(port);
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
rte_eth_dev_set_vlan_offload(port, vlan_offload);
Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d35751d5b5a7..594e98a6b803 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
Supports getting the speed capabilities that the current device is capable of.
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
* **[related] API**: ``rte_eth_dev_info_get()``.
@@ -101,11 +101,11 @@ Supports Rx interrupts.
Lock-free Tx queue
------------------
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -117,8 +117,8 @@ Fast mbuf free
Supports optimization for fast release of mbufs following successful Tx.
Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
.. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
Supports receiving segmented mbufs.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
Supports Large Receive Offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
@@ -221,12 +221,12 @@ TSO
Supports TCP Segmentation Offloading.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
Supports RSS hashing on RX.
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -302,7 +302,7 @@ Inner RSS
Supports RX RSS hashing on Inner headers.
* **[uses] rte_flow_action_rss**: ``level``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -339,7 +339,7 @@ VMDq
Supports Virtual Machine Device Queues (VMDq).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
Supports Data Center Bridging (DCB).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
Supports filtering of a VLAN Tag identifier.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
operations of security protocol while packet is received in NIC. NIC is not aware
of protocol operations. See Security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
packet is received at NIC. The NIC is capable of understanding the security
protocol operations. See security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
Supports CRC stripping by hardware.
A PMD assumed to support CRC stripping by default. PMD should advertise if it supports keeping CRC.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
.. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
Supports VLAN offload to hardware.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -487,14 +487,14 @@ QinQ offload
Supports QinQ (queue in queue) offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
* **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides] rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides] rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
* **[related] API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
@@ -519,16 +519,16 @@ L3 checksum offload
Supports L3 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[uses] mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
Supports L4 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_hw_timestamp:
@@ -557,10 +557,10 @@ Timestamp offload
Supports Timestamp.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[related] eth_dev_ops**: ``read_clock``.
.. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
Supports MACsec.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
Supports inner packet L3 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
Supports inner packet L4 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
.. _nic_features_shared_rx_queue:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
will be checked:
-* ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+* ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
-* ``DEV_RX_OFFLOAD_CHECKSUM``
+* ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
-* ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
* ``fdir_conf->mode``
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
* If the max number of VFs (max_vfs) is set in the range of 1 to 32:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
* If the max number of VFs (max_vfs) is in the range of 33 to 64:
If the number of Rx queues in specified as 4 (``--rxq=4`` in testpmd), then error message is expected
as ``rxq`` is not correct at this case;
- If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+ If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
and each VF have 2 Rx queues;
- On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
- or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+ On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+ or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
It also needs config VF RSS information like hash function, RSS key, RSS key length.
.. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
-* DEV_RX_OFFLOAD_VLAN_STRIP
+* RTE_ETH_RX_OFFLOAD_VLAN_STRIP
-* DEV_RX_OFFLOAD_VLAN_EXTEND
+* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
-* DEV_RX_OFFLOAD_CHECKSUM
+* RTE_ETH_RX_OFFLOAD_CHECKSUM
-* DEV_RX_OFFLOAD_HEADER_SPLIT
+* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
* dev_conf
@@ -163,13 +163,13 @@ l3fwd
~~~~~
When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.
load_balancer
~~~~~~~~~~~~~
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index dd059b227d8e..86927a0b56b0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
- CRC:
- - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+ - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
@@ -611,7 +611,7 @@ Driver options
small-packet traffic.
When MPRQ is enabled, MTU can be larger than the size of
- user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+ user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
be added in next releases
TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
**Known limitation:** TAP supports all of the above hash functions together
and not in partial combinations.
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
- the bit mask of required GSO types. The GSO library uses the same macros as
those that describe a physical device's TX offloading capabilities (i.e.
- ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+ ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
wants to segment TCP/IPv4 packets, it should set gso_types to
- ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
- supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
- ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+ ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+ supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+ ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
allowed.
- a flag, that indicates whether the IPv4 headers of output segments should
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
set out_ip checksum to 0 in the packet
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
- calculate checksum of out_ip and out_udp::
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
- and DEV_TX_OFFLOAD_UDP_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+ and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
- calculate checksum of in_ip::
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
This is similar to case 1), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
This is similar to case 2), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
- DEV_TX_OFFLOAD_TCP_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
- DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API
documentation (rte_mbuf.h). Also refer to the testpmd source code
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
enables more scaling as all workers can send the packets.
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
Device Identification, Ownership and Configuration
--------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any requested offloading by an application must be within the device capabilities.
Any offloading is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a2169517c3f9..d798adb83e1d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1993,23 +1993,23 @@ only matching traffic goes through.
.. table:: RSS
- +---------------+---------------------------------------------+
- | Field | Value |
- +===============+=============================================+
- | ``func`` | RSS hash function to apply |
- +---------------+---------------------------------------------+
- | ``level`` | encapsulation level for ``types`` |
- +---------------+---------------------------------------------+
- | ``types`` | specific RSS hash types (see ``ETH_RSS_*``) |
- +---------------+---------------------------------------------+
- | ``key_len`` | hash key length in bytes |
- +---------------+---------------------------------------------+
- | ``queue_num`` | number of entries in ``queue`` |
- +---------------+---------------------------------------------+
- | ``key`` | hash key |
- +---------------+---------------------------------------------+
- | ``queue`` | queue indices to use |
- +---------------+---------------------------------------------+
+ +---------------+-------------------------------------------------+
+ | Field | Value |
+ +===============+=================================================+
+ | ``func`` | RSS hash function to apply |
+ +---------------+-------------------------------------------------+
+ | ``level`` | encapsulation level for ``types`` |
+ +---------------+-------------------------------------------------+
+ | ``types`` | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+ +---------------+-------------------------------------------------+
+ | ``key_len`` | hash key length in bytes |
+ +---------------+-------------------------------------------------+
+ | ``queue_num`` | number of entries in ``queue`` |
+ +---------------+-------------------------------------------------+
+ | ``key`` | hash key |
+ +---------------+-------------------------------------------------+
+ | ``queue`` | queue indices to use |
+ +---------------+-------------------------------------------------+
Action: ``PF``
^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
For Inline Crypto and Inline protocol offload, device specific defined metadata is
updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
For inline protocol offloaded ingress traffic, the application can register a
pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cc2b89850b07..f11550dc78ac 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,22 +69,16 @@ Deprecation Notices
``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
usage in following public struct hierarchy:
- ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+ ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
Need to identify this kind of usages and fix in 20.11, otherwise this blocks
us extending existing enum/define.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
- Macros will be added for backward compatibility.
- Backward compatibility macros will be removed on v22.11.
- A few old backward compatibility macros from 2013 that does not have
- proper prefix will be removed on v21.11.
-
* ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
will be removed in DPDK 20.11.
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
This will allow application to enable or disable PMDs from updating
``rte_mbuf::hash::fdir``.
This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 569d3c00b9ee..b327c2bfca1c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -446,6 +446,9 @@ ABI Changes
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+ updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
Known Issues
------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
* ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
- (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the RX HW offload capabilities.
By default all HW RX offloads are enabled.
* ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
- (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the TX HW offload capabilities.
By default all HW TX offloads are enabled.
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index d23e0b6a7a2e..30edef07ea20 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -546,7 +546,7 @@ The command line options are:
Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
The default value is 0x7::
- ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+ RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
* ``--record-core-cycles``
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
struct usdpaa_ioctl_link_status_args_old {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
- /* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+ /* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
int link_autoneg;
};
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
struct usdpaa_ioctl_update_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_update_link_speed {
/* network device node name*/
char if_name[IF_NAME_MAX_LEN];
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
};
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index ef85073b17e1..e13d55713625 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -167,7 +167,7 @@ enum roc_npc_rss_hash_function {
struct roc_npc_action_rss {
enum roc_npc_rss_hash_function func;
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
internals->tx_queue[i].sockfd = -1;
}
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
struct pmd_internals *internals = dev->data->dev_private;
- internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
/* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
static int
eth_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
/* ARK PMD supports all line rates, how do we indicate that here ?? */
- dev_info->speed_capa = (ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G);
-
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G);
+
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return 0;
}
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
.remove = eth_atl_pci_remove,
};
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
- | DEV_RX_OFFLOAD_IPV4_CKSUM \
- | DEV_RX_OFFLOAD_UDP_CKSUM \
- | DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_MACSEC_STRIP \
- | DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
- | DEV_TX_OFFLOAD_IPV4_CKSUM \
- | DEV_TX_OFFLOAD_UDP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_TSO \
- | DEV_TX_OFFLOAD_MACSEC_INSERT \
- | DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+ | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+ | RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+ | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_TSO \
+ | RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+ | RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define SFP_EEPROM_SIZE 0x100
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
/* set adapter started */
hw->adapter_stopped = 0;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
uint32_t link_speeds = dev->data->dev_conf.link_speeds;
uint32_t speed_mask = 0;
- if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed_mask = hw->aq_nic_cfg->link_speed_msk;
} else {
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
speed_mask |= AQ_NIC_RATE_10G;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
speed_mask |= AQ_NIC_RATE_5G;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
speed_mask |= AQ_NIC_RATE_1G;
- if (link_speeds & ETH_LINK_SPEED_2_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed_mask |= AQ_NIC_RATE_2G5;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
speed_mask |= AQ_NIC_RATE_100M;
}
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
return 0;
}
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
u32 fc = AQ_NIC_FC_OFF;
int err = 0;
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
memset(&old, 0, sizeof(old));
/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
return 0;
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = hw->aq_link_status.mbps;
rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
hw->aq_fw_ops->get_flow_control(hw, &fc);
if (fc == AQ_NIC_FC_OFF)
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (fc & AQ_NIC_FC_RX)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (fc & AQ_NIC_FC_TX)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
return 0;
}
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
if (hw->aq_fw_ops->set_flow_control == NULL)
return -ENOTSUP;
- if (fc_conf->mode == RTE_FC_NONE)
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
- else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
- else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
- else if (fc_conf->mode == RTE_FC_FULL)
+ else if (fc_conf->mode == RTE_ETH_FC_FULL)
hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+ ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
- cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+ cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
for (i = 0; i < dev->data->nb_rx_queues; i++)
hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
- if (mask & ETH_VLAN_EXTEND_MASK)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK)
ret = -ENOTSUP;
return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
PMD_INIT_FUNC_TRACE();
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
break;
default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
#include "hw_atl/hw_atl_utils.h"
#define ATL_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define ATL_DEV_PRIVATE_TO_HW(adapter) \
(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
/* Setup required number of queues */
_avp_set_queue_counts(eth_dev);
- mask = (ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ mask = (RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
ret = avp_vlan_offload_set(eth_dev, mask);
if (ret < 0) {
PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_eth_link *link = &eth_dev->data->dev_link;
- link->link_speed = ETH_SPEED_NUM_10G;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_status = !!(avp->flags & AVP_F_LINKUP);
return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
}
return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
uint64_t offloads = dev_conf->rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
else
avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
}
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
pdata->rss_hf = rss_conf->rss_hf;
rss_hf = rss_conf->rss_hf;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
/* Checksum offload to hardware */
pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
}
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
{
struct axgbe_port *pdata = dev->data->dev_private;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
pdata->rss_enable = 1;
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
pdata->rss_enable = 0;
else
return -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
- if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
link.link_status = pdata->phy_link;
link.link_speed = pdata->phy_speed;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (pdata->hw_feat.rss) {
dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc.autoneg = pdata->pause_autoneg;
if (pdata->rx_pause && pdata->tx_pause)
- fc.mode = RTE_FC_FULL;
+ fc.mode = RTE_ETH_FC_FULL;
else if (pdata->rx_pause)
- fc.mode = RTE_FC_RX_PAUSE;
+ fc.mode = RTE_ETH_FC_RX_PAUSE;
else if (pdata->tx_pause)
- fc.mode = RTE_FC_TX_PAUSE;
+ fc.mode = RTE_ETH_FC_TX_PAUSE;
else
- fc.mode = RTE_FC_NONE;
+ fc.mode = RTE_ETH_FC_NONE;
fc_conf->high_water = (1024 + (fc.low_water[0] << 9)) / 1024;
fc_conf->low_water = (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
AXGMAC_IOWRITE(pdata, reg, reg_val);
fc.mode = fc_conf->mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
fc.mode = pfc_conf->fc.mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+ case RTE_ETH_VLAN_TYPE_INNER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
if (qinq) {
if (tpid != 0x8100 && tpid != 0x88a8)
PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"Inner type not supported in single tag\n");
}
break;
- case ETH_VLAN_TYPE_OUTER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+ case RTE_ETH_VLAN_TYPE_OUTER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
if (qinq) {
PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"tag supported 0x8100/0x88A8\n");
}
break;
- case ETH_VLAN_TYPE_MAX:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+ case RTE_ETH_VLAN_TYPE_MAX:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
break;
- case ETH_VLAN_TYPE_UNKNOWN:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+ case RTE_ETH_VLAN_TYPE_UNKNOWN:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
break;
}
return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_stripping(pdata);
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_filtering(pdata);
}
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
axgbe_vlan_extend_enable(pdata);
/* Set global registers with default ethertype*/
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
} else {
PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
/* Receive Side Scaling */
#define AXGBE_RSS_OFFLOAD ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define AXGBE_RSS_HASH_KEY_SIZE 40
#define AXGBE_RSS_MAX_TABLE_SIZE 256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
pdata->an_int = 0;
axgbe_an73_clear_interrupts(pdata);
pdata->eth_dev->data->dev_link.link_status =
- ETH_LINK_DOWN;
+ RTE_ETH_LINK_DOWN;
} else if (pdata->an_state == AXGBE_AN_ERROR) {
PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
(DMA_CH_INC * rxq->queue_id));
rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
DMA_CH_RDTR_LO);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
link.link_speed = sc->link_vars.line_speed;
switch (sc->link_vars.duplex) {
case DUPLEX_FULL:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case DUPLEX_HALF:
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
link.link_status = sc->link_vars.link_up;
return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
"VF device is no longer operational");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
}
return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
- dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
bnx2x_load_firmware(sc);
assert(sc->firmware);
- if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
sc->udp_rss = 1;
sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
#define BNXT_FW_STATUS_SHUTDOWN 0x100000
#define BNXT_ETH_RSS_SUPPORT ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
goto err_out;
/* Alloc RSS context only if RSS mode is enabled */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int j, nr_ctxs = bnxt_rss_ctxts(bp);
/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
* setting is not available at this time, it will not be
* configured correctly in the CFA.
*/
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
- (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
true : false);
if (rc)
goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
link_speed = bp->link_info->support_pam4_speeds;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
- speed_capa |= ETH_LINK_SPEED_2_5G;
+ speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
- speed_capa |= ETH_LINK_SPEED_20G;
+ speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
if (bp->link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
dev_info->tx_queue_offload_capa;
if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
*/
/* VMDq resources */
- vpool = 64; /* ETH_64_POOLS */
- vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+ vpool = 64; /* RTE_ETH_64_POOLS */
+ vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
for (i = 0; i < 4; vpool >>= 1, i++) {
if (max_vnics > vpool) {
for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
(uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
goto resource_error;
- if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+ if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
bp->max_vnics < eth_dev->data->nb_rx_queues)
goto resource_error;
bp->rx_cp_nr_rings = bp->rx_nr_rings;
bp->tx_cp_nr_rings = bp->tx_nr_rings;
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
eth_dev->data->port_id,
(uint32_t)link->link_speed,
- (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
uint16_t buf_size;
int i;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return 1;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
return 1;
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
* a limited subset have been enabled.
*/
if (eth_dev->data->dev_conf.rxmode.offloads &
- ~(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_VLAN_FILTER))
+ ~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
goto use_scalar_rx;
#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
* or tx offloads.
*/
if (eth_dev->data->scattered_rx ||
- (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
BNXT_TRUFLOW_EN(bp))
goto use_scalar_tx;
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
bnxt_link_update_op(eth_dev, 1);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- vlan_mask |= ETH_VLAN_FILTER_MASK;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- vlan_mask |= ETH_VLAN_STRIP_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
if (rc)
goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
/* Retrieve link info from hardware */
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
- new.link_speed = ETH_LINK_SPEED_100M;
- new.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new.link_speed = RTE_ETH_LINK_SPEED_100M;
+ new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR,
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
if (!vnic->rss_table)
return -EINVAL;
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return -EINVAL;
if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
for (i = 0; i < reta_size; i++) {
struct bnxt_rx_queue *rxq;
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << sft)))
continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
}
for (idx = 0, i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << sft)) {
uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
* If RSS enablement were different than dev_configure,
* then return -EINVAL
*/
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (!rss_conf->rss_hf)
PMD_DRV_LOG(ERR, "Hash type NONE\n");
} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
vnic->hash_mode =
bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
- ETH_RSS_LEVEL(rss_conf->rss_hf));
+ RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
/*
* If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
hash_types = vnic->hash_type;
rss_conf->rss_hf = 0;
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
fc_conf->autoneg = 1;
switch (bp->link_info->pause) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
}
return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
bp->link_info->auto_pause = 0;
bp->link_info->force_pause = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
}
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
}
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
tunnel_type =
HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (!bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
port = bp->vxlan_fw_dst_port_id;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (!bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
int rc;
vnic = BNXT_GET_DEFAULT_VNIC(bp);
- if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
/* Remove any VLAN filters programmed */
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
bnxt_add_vlan_filter(bp, 0);
}
PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
return 0;
}
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
/* Destroy vnic filters and vnic */
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = bnxt_add_vlan_filter(bp, 0);
if (rc)
return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
return rc;
}
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
if (!dev->data->dev_started)
return 0;
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering */
rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
else
PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
{
struct bnxt *bp = dev->data->dev_private;
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
- if (vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
return -EINVAL;
}
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
switch (tpid) {
case RTE_ETHER_TYPE_QINQ:
bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
}
bp->outer_tpid_bd |= tpid;
PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer vlan in QinQ\n");
return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
bnxt_del_dflt_mac_filter(bp, vnic);
memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
/* This filter will allow only untagged packets */
rc = bnxt_add_vlan_filter(bp, 0);
} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
}
/* If RSS types is 0, use a best effort configuration */
- types = rss->types ? rss->types : ETH_RSS_IPV4;
+ types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
hash_type = bnxt_rte_to_hwrm_hash_types(types);
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
rxq = bp->rx_queues[act_q->index];
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
vnic->fw_vnic_id != INVALID_HW_RING_ID)
goto use_vnic;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
uint16_t j = dst_id - 1;
//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
- if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
conf->pool_map[j].pools & (1UL << j)) {
PMD_DRV_LOG(DEBUG,
"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
{
uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
- if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+ if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
switch (conf_link_speed) {
- case ETH_LINK_SPEED_10M_HD:
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
{
uint16_t eth_link_speed = 0;
- if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
- return ETH_LINK_SPEED_AUTONEG;
+ if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+ return RTE_ETH_LINK_SPEED_AUTONEG;
- switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_100M:
- case ETH_LINK_SPEED_100M_HD:
+ switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
break;
- case ETH_LINK_SPEED_2_5G:
+ case RTE_ETH_LINK_SPEED_2_5G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
break;
- case ETH_LINK_SPEED_20G:
+ case RTE_ETH_LINK_SPEED_20G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
return eth_link_speed;
}
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
- ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
static int bnxt_validate_link_speed(struct bnxt *bp)
{
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
uint32_t link_speed_capa;
uint32_t one_speed;
- if (link_speed == ETH_LINK_SPEED_AUTONEG)
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
return 0;
link_speed_capa = bnxt_get_speed_capabilities(bp);
- if (link_speed & ETH_LINK_SPEED_FIXED) {
- one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+ if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+ one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
if (one_speed & (one_speed - 1)) {
PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
{
uint16_t ret = 0;
- if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
if (bp->link_info->support_speeds)
return bp->link_info->support_speeds;
link_speed = BNXT_SUPPORTED_SPEEDS;
}
- if (link_speed & ETH_LINK_SPEED_100M)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_100M_HD)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_1G)
+ if (link_speed & RTE_ETH_LINK_SPEED_1G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
- if (link_speed & ETH_LINK_SPEED_2_5G)
+ if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
- if (link_speed & ETH_LINK_SPEED_10G)
+ if (link_speed & RTE_ETH_LINK_SPEED_10G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
- if (link_speed & ETH_LINK_SPEED_20G)
+ if (link_speed & RTE_ETH_LINK_SPEED_20G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
- if (link_speed & ETH_LINK_SPEED_25G)
+ if (link_speed & RTE_ETH_LINK_SPEED_25G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
- if (link_speed & ETH_LINK_SPEED_40G)
+ if (link_speed & RTE_ETH_LINK_SPEED_40G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
- if (link_speed & ETH_LINK_SPEED_50G)
+ if (link_speed & RTE_ETH_LINK_SPEED_50G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
- if (link_speed & ETH_LINK_SPEED_100G)
+ if (link_speed & RTE_ETH_LINK_SPEED_100G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
- if (link_speed & ETH_LINK_SPEED_200G)
+ if (link_speed & RTE_ETH_LINK_SPEED_200G)
ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
return ret;
}
static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
{
- uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+ uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
switch (hw_link_speed) {
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
- eth_link_speed = ETH_SPEED_NUM_100M;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
- eth_link_speed = ETH_SPEED_NUM_1G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
- eth_link_speed = ETH_SPEED_NUM_2_5G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
- eth_link_speed = ETH_SPEED_NUM_10G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
- eth_link_speed = ETH_SPEED_NUM_20G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
- eth_link_speed = ETH_SPEED_NUM_25G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
- eth_link_speed = ETH_SPEED_NUM_40G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
- eth_link_speed = ETH_SPEED_NUM_50G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
- eth_link_speed = ETH_SPEED_NUM_100G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
- eth_link_speed = ETH_SPEED_NUM_200G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_200G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
{
- uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (hw_link_duplex) {
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
/* FALLTHROUGH */
- eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
- eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
default:
PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
link->link_speed =
bnxt_parse_hw_link_speed(link_info->link_speed);
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
link->link_status = link_info->link_up;
link->link_autoneg = link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
- ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
exit:
return rc;
}
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
if (BNXT_CHIP_P5(bp) &&
- dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+ dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
/* 40G is not supported as part of media auto detect.
* The speed should be forced and autoneg disabled
* to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
HWRM_CHECK_RESULT();
- bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+ bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
svif_info = rte_le_to_cpu_16(resp->svif_info);
if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
rx_ring_info->rx_ring_struct->ring_size *
AGG_RING_SIZE_FACTOR)) : 0;
- if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
int tpa_max = BNXT_TPA_MAX_AGGS(bp);
tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
ag_bitmap_start, ag_bitmap_len);
/* TPA info */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rx_ring_info->tpa_info =
((struct bnxt_tpa_info *)
((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->nr_vnics = 0;
/* Multi-queue mode */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* FALLTHROUGH */
/* ETH_8/64_POOLs */
pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
max_pools = RTE_MIN(bp->max_vnics,
RTE_MIN(bp->max_l2_ctx,
RTE_MIN(bp->max_rsscos_ctx,
- ETH_64_POOLS)));
+ RTE_ETH_64_POOLS)));
PMD_DRV_LOG(DEBUG,
"pools = %u max_pools = %u\n",
pools, max_pools);
if (pools > max_pools)
pools = max_pools;
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
break;
default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
ring_idx, rxq, i, vnic);
}
if (i == 0) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
bp->eth_dev->data->promiscuous = 1;
vnic->flags |= BNXT_VNIC_INFO_PROMISC;
}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
vnic->end_grp_id = end_grp_id;
if (i) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
- !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+ !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
vnic->rss_dflt_cr = true;
goto skip_filter_allocation;
}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->rx_num_qs_per_vnic = nb_q_per_grp;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
if (bp->flags & BNXT_FLAG_UPDATE_HASH)
bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
for (i = 0; i < bp->nr_vnics; i++) {
- uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+ uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
vnic = &bp->vnic_info[i];
vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
rxq->queue_id = queue_idx;
rxq->port_id = eth_dev->data->port_id;
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
vnic = rxq->vnic;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq->rx_started = false;
PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (BNXT_HAS_RING_GRPS(bp))
vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
dev_conf = &rxq->bp->eth_dev->data->dev_conf;
offloads = dev_conf->rxmode.offloads;
- outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+ outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
/* Initialize ol_flags table. */
pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
{
uint16_t hwrm_type = 0;
- if (rte_type & ETH_RSS_IPV4)
+ if (rte_type & RTE_ETH_RSS_IPV4)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
- if (rte_type & ETH_RSS_IPV6)
+ if (rte_type & RTE_ETH_RSS_IPV6)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
{
uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
- bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
- bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP));
+ bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+ bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP));
bool l3_only = l3 && !l4;
bool l3_and_l4 = l3 && l4;
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
* return default hash mode.
*/
if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
- return ETH_RSS_LEVEL_PMD_DEFAULT;
+ return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
- rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
- rss_level |= ETH_RSS_LEVEL_INNERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
else
- rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+ rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
return rss_level;
}
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
if (vf >= bp->pdev->max_vfs)
return -EINVAL;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
return -ENOTSUP;
}
/* Is this really the correct mapping? VFd seems to think it is. */
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
flag |= BNXT_VNIC_INFO_PROMISC;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
flag |= BNXT_VNIC_INFO_BCAST;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptor limits */
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[52]; /**< 52-byte hash key buffer. */
uint8_t rss_key_len; /**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
uint16_t key_speed;
switch (speed) {
- case ETH_SPEED_NUM_NONE:
+ case RTE_ETH_SPEED_NUM_NONE:
key_speed = 0x00;
break;
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
key_speed = BOND_LINK_SPEED_KEY_10M;
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
key_speed = BOND_LINK_SPEED_KEY_100M;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
key_speed = BOND_LINK_SPEED_KEY_1000M;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
key_speed = BOND_LINK_SPEED_KEY_10G;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
key_speed = BOND_LINK_SPEED_KEY_20G;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
key_speed = BOND_LINK_SPEED_KEY_40G;
break;
default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (ret >= 0 && link_info.link_status != 0) {
key = link_speed_key(link_info.link_speed) << 1;
- if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+ if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
key |= BOND_LINK_FULL_DUPLEX_KEY;
} else {
key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
return 0;
internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return -1;
}
- if (link_props.link_status == ETH_LINK_UP) {
+ if (link_props.link_status == RTE_ETH_LINK_UP) {
if (internals->active_slave_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
internals->tx_queue_offload_capa = 0;
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = 0;
internals->candidate_max_rx_pktlen = 0;
internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
* In any other mode the link properties are set to default
* values of AUTONEG/DUPLEX
*/
- ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
- ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+ ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
}
}
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
/* If RSS is enabled for bonding, try to enable it for slaves */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* If RSS is enabled for bonding, synchronize RETA */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int i;
struct bond_dev_private *internals;
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 1;
internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
tlb_last_obytets[internals->active_slaves[i]] = 0;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
bond_ctx = ethdev->data->dev_private;
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
bond_ctx->active_slave_count == 0) {
- ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
- ethdev->data->dev_link.link_status = ETH_LINK_UP;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
if (wait_to_complete)
link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
&slave_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
- ETH_SPEED_NUM_NONE;
+ RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
"Slave (port %u) link get failed: %s",
bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
* In these modes the maximum theoretical link speed is the sum
* of all the slaves
*/
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
goto link_update;
/* check link state properties if bonded link is up*/
- if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
"for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (internals->active_slave_count < 1) {
/* If first active slave, then change link status */
bonded_eth_dev->data->dev_link.link_status =
- ETH_LINK_UP;
+ RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < reta_count; i++) {
internals->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->max_rx_pktlen = 0;
/* Initially allow to choose any offload type */
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
memset(&internals->default_rxconf, 0,
sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* set key to the value specified in port RSS configuration.
* Fall back to default RSS key if the key is not specified
*/
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
struct rte_eth_rss_conf *rss_conf =
&dev->data->dev_conf.rx_adv_conf.rss_conf;
if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
internals->reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
internals->reta_conf[i].reta[j] =
- (i * RTE_RETA_GROUP_SIZE + j) %
+ (i * RTE_ETH_RETA_GROUP_SIZE + j) %
dev->data->nb_rx_queues;
}
}
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 25da5f6691d0..f7eb0f437b77 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
/* Platform specific checks */
if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
plt_err("Outer IP and SCTP checksum unsupported");
return -EINVAL;
}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* TSO not supported for earlier chip revisions
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
- dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
/* 50G and 100G to be supported for board version C0
* and above of CN9K.
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
}
dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
if (roc_nix_is_vf_or_sdp(&dev->nix) ||
dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
uint32_t speed_capa;
/* Auto negotiation disabled */
- speed_capa = ETH_LINK_SPEED_FIXED;
+ speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
- speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
}
return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
struct roc_nix *nix = &dev->nix;
int i, rc = 0;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup Inline Inbound */
rc = roc_nix_inl_inb_init(nix);
if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
cnxk_nix_inb_mode_set(dev, true);
}
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
struct plt_bitmap *bmap;
size_t bmap_sz;
void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
- /* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+ /* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
goto done;
rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
done:
return 0;
cleanup:
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
rc |= roc_nix_inl_inb_fini(nix);
return rc;
}
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
int rc, ret = 0;
/* Cleanup Inline inbound */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy inbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
}
/* Cleanup Inline outbound */
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy outbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
}
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct rte_eth_fc_conf fc_conf = {0};
int rc;
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
- (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+ (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_cfg.mode =
- (fc_cfg.mode == RTE_FC_FULL ||
- fc_cfg.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_cfg.mode == RTE_ETH_FC_FULL ||
+ fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
/* When Tx Security offload is enabled, increase tx desc count by
* max possible outbound desc count.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
nb_desc += dev->outb.nb_desc;
/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* to avoid meta packet drop as LBK does not currently support
* backpressure.
*/
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
rxq_sp->qconf.nb_desc = nb_desc;
rxq_sp->qconf.mp = mp;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup rq reference for inline dev if present */
rc = roc_nix_inl_dev_rq_get(rq);
if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
rc = cnxk_nix_tsc_convert(dev);
if (rc) {
plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
plt_nix_dbg("Releasing rxq %u", qid);
/* Release rq reference for inline dev if present */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
roc_nix_inl_dev_rq_put(rq);
/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
dev->ethdev_rss_hf = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
uint64_t rss_hf;
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
/* Nothing much to do if offload is not enabled */
if (!(dev->tx_offloads &
- (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+ (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
return 0;
/* Setup LSO formats in AF. It's a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
/* Prepare rx cfg */
rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
/* Disable drop re if rx offload security is enabled and
* platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled on PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
cnxk_eth_dev_ops.timesync_enable(eth_dev);
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
CNXK_NIX_TX_NB_SEG_MAX)
#define CNXK_NIX_RSS_L3_L4_SRC_DST \
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
#define CNXK_NIX_RSS_OFFLOAD \
- (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
- ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+ (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL | \
+ RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST | \
+ RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
#define CNXK_NIX_TX_OFFLOAD_CAPA \
- (DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+ (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
#define CNXK_NIX_RX_OFFLOAD_CAPA \
- (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
- DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SECURITY)
+ (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SECURITY)
#define RSS_IPV4_ENABLE \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE \
- (ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+ (RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
val = ROC_NIX_RSS_RETA_SZ_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
val = ROC_NIX_RSS_RETA_SZ_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
val = ROC_NIX_RSS_RETA_SZ_256;
else
val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
- {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
- {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
- {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_SECURITY, " Security,"},
- {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+ {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+ {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
"Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
- {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
- {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
- {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
- {DEV_TX_OFFLOAD_SECURITY, " Security,"},
- {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+ {RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
};
static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
"Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
enum rte_eth_fc_mode mode_map[] = {
- RTE_FC_NONE, RTE_FC_RX_PAUSE,
- RTE_FC_TX_PAUSE, RTE_FC_FULL
+ RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+ RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
};
struct roc_nix *nix = &dev->nix;
int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
plt_err("Scatter offload is not enabled for mtu");
goto exit;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
plt_err("Greater than maximum supported packet length");
goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta[idx] = reta_conf[i].reta[j];
idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
goto fail;
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = reta[idx];
idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_conf->rss_key)
roc_nix_rss_key_set(nix, rss_conf->rss_key);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
plt_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex"
: "half-duplex");
else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
eth_link.link_status = link->status;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return 0;
if (roc_nix_is_lbk(&dev->nix)) {
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_autoneg = ETH_LINK_FIXED;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
rc = roc_nix_mac_link_info_get(&dev->nix, &info);
if (rc)
return rc;
link.link_status = info.status;
link.link_speed = info.speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (info.full_duplex)
link.link_duplex = info.full_duplex;
}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rc = roc_nix_ptp_rx_ena_dis(nix, true);
if (!rc) {
@@ -257,7 +257,7 @@ int
cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+ uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
struct roc_nix *nix = &dev->nix;
int rc = 0;
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index dfc33ba8654a..b08d7c34faa9 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
#define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
#define CXGBE_DEFAULT_RSS_KEY_LEN 40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
/* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
/* Devargs filtermode and filtermask representation */
enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
}
new_link.link_status = cxgbe_force_linkup(adapter) ?
- ETH_LINK_UP : pi->link_cfg.link_ok;
+ RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
goto out;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
else
eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
CXGBE_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!(adapter->flags & FW_QUEUE_BOUND)) {
err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
rx_pause = 1;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
u8 tx_pause = 0, rx_pause = 0;
int ret;
- if (fc_conf->mode == RTE_FC_FULL) {
+ if (fc_conf->mode == RTE_ETH_FC_FULL) {
tx_pause = 1;
rx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
tx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
rx_pause = 1;
}
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
}
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_100G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_50G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_25G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
* that step explicitly.
*/
ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
- !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+ !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
true);
if (ret == 0) {
ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
}
if (ret == 0 && cxgbe_force_linkup(adapter))
- pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return ret;
}
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_UDPEN;
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
{
#define SET_SPEED(__speed_name) \
do { \
- *speed_caps |= ETH_LINK_ ## __speed_name; \
+ *speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
} while (0)
#define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
speed_caps);
if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
- *speed_caps |= ETH_LINK_SPEED_FIXED;
+ *speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
}
/**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
- if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
/* Configure link only if link is UP*/
if (link->link_status) {
- if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
/* Start autoneg only if link is not in autoneg mode */
if (!link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- } else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
- switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M_HD:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ } else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_10M:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_100M_HD:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M_HD:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_100M:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_1G:
- speed = ETH_SPEED_NUM_1G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_2_5G:
- speed = ETH_SPEED_NUM_2_5G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_2_5G:
+ speed = RTE_ETH_SPEED_NUM_2_5G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_10G:
- speed = ETH_SPEED_NUM_10G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
- speed = ETH_SPEED_NUM_NONE;
- duplex = ETH_LINK_FULL_DUPLEX;
+ speed = RTE_ETH_SPEED_NUM_NONE;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
}
/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
if (fif->mac_type == fman_mac_1g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G;
} else if (fif->mac_type == fman_mac_2_5g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G
- | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G
+ | RTE_ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
return ret;
- if (link->link_status == ETH_LINK_DOWN &&
+ if (link->link_status == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
if (ioctl_version < 2) {
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (fif->mac_type == fman_mac_1g)
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
else if (fif->mac_type == fman_mac_2_5g)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else if (fif->mac_type == fman_mac_10g)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (fc_conf->mode == RTE_FC_NONE) {
+ if (fc_conf->mode == RTE_ETH_FC_NONE) {
return 0;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
- fc_conf->mode == RTE_FC_FULL) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+ fc_conf->mode == RTE_ETH_FC_FULL) {
fman_if_set_fc_threshold(dev->process_private,
fc_conf->high_water,
fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
}
ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time =
fman_if_get_fc_quanta(dev->process_private);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
fc_conf = dpaa_intf->fc_conf;
ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
#define DPAA_DEBUG_FQ_TX_ERROR 1
#define DPAA_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP)
#define DPAA_TX_CKSUM_OFFLOAD_MASK ( \
PKT_TX_IP_CKSUM | \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
if (req_dist_set % 2 != 0) {
dist_field = 1U << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_ETH;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
if (ipv4_configured)
break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV4;
break;
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (ipv6_configured)
break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV6;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
if (tcp_configured)
break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_TCP;
break;
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (udp_configured)
break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_UDP;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
if (req_dist_set % 2 != 0) {
dist_field = 1ULL << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
- case ETH_RSS_ETH:
-
+ case RTE_ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_ETH:
if (l2_configured)
break;
l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_PPPOE:
+ case RTE_ETH_RSS_PPPOE:
if (pppoe_configured)
break;
kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_ESP:
+ case RTE_ETH_RSS_ESP:
if (esp_configured)
break;
esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_AH:
+ case RTE_ETH_RSS_AH:
if (ah_configured)
break;
ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_C_VLAN:
- case ETH_RSS_S_VLAN:
+ case RTE_ETH_RSS_C_VLAN:
+ case RTE_ETH_RSS_S_VLAN:
if (vlan_configured)
break;
vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_MPLS:
+ case RTE_ETH_RSS_MPLS:
if (mpls_configured)
break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (l3_configured)
break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_TCP_EX:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (l4_configured)
break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* enable timestamp in mbuf */
bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN Filter not avaialble */
if (!priv->max_vlan_filters) {
DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
priv->token, true);
else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_rx_offloads_nodis;
dev_info->tx_offload_capa = dev_tx_offloads_sup |
dev_tx_offloads_nodis;
- dev_info->speed_capa = ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
if (dpaa2_svr_family == SVR_LX2160A) {
- dev_info->speed_capa |= ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+ {RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
};
/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
return -1;
}
- if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
ret = dpaa2_setup_flow_dist(dev,
eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rx_l3_csum_offload = true;
- if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
rx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
#if !defined(RTE_LIBRTE_IEEE1588)
- if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
#endif
{
ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dpaa2_enable_ts[dev->data->port_id] = true;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
tx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
dpaa2_tm_init(dev);
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
return -1;
}
- if (state.up == ETH_LINK_DOWN &&
+ if (state.up == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
link.link_speed = state.rate;
if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* No TX side flow control (send Pause frame disabled)
*/
if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
} else {
/* DPNI_LINK_OPT_PAUSE not set
* if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* Flow control disabled
*/
if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* update cfg with fc_conf */
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
/* Full flow control;
* OPT_PAUSE set, ASYM_PAUSE not set
*/
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
/* Enable RX flow control
* OPT_PAUSE not set;
* ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_PAUSE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
/* Enable TX Flow control
* OPT_PAUSE set
* ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
/* Disable Flow control
* OPT_PAUSE not set
* ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
#define DPAA2_TX_CONF_ENABLE 0x08
#define DPAA2_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP | \
- ETH_RSS_MPLS | \
- ETH_RSS_C_VLAN | \
- ETH_RSS_S_VLAN | \
- ETH_RSS_ESP | \
- ETH_RSS_AH | \
- ETH_RSS_PPPOE)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_MPLS | \
+ RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_ESP | \
+ RTE_ETH_RSS_AH | \
+ RTE_ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
rte_vlan_strip(bufs[num_rx]);
dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
eth_data->port_id);
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP) {
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
rte_vlan_strip(bufs[num_rx]);
}
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags
& PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
int ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 7d5d6377859a..a548ae2ccb2c 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -82,15 +82,15 @@
#define E1000_FTQF_QUEUE_ENABLE 0x00000100
#define IGB_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
/*
* The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
e1000_clear_hw_cntrs_base_generic(hw);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_em_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
em_vlan_hw_strip_enable(dev);
else
em_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
em_vlan_hw_filter_enable(dev);
else
em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
if (link.link_status) {
PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
struct em_rx_entry *sw_ring; /**< address of RX software ring. */
struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
- uint64_t offloads; /**< Offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
uint16_t nb_rx_desc; /**< number of RX descriptors. */
uint16_t rx_tail; /**< current value of RDT register. */
uint16_t nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
uint8_t wthresh; /**< Write-back threshold register. */
struct em_ctx_info ctx_cache;
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
RTE_SET_USED(dev);
tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
return tx_offload_capa;
}
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
uint64_t rx_offload_capa;
rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
return rx_offload_capa;
}
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
- if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
- tx_mq_mode == ETH_MQ_TX_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+ tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* Check multi-queue mode.
- * To no break software we accept ETH_MQ_RX_NONE as this might
+ * To no break software we accept RTE_ETH_MQ_RX_NONE as this might
* be used to turn off VLAN filter.
*/
- if (rx_mq_mode == ETH_MQ_RX_NONE ||
- rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
} else {
/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
/* TX mode is not used here, so mode might be ignored.*/
- if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(WARNING, "SRIOV is active,"
" TX mode %d is not supported. "
" Driver will behave as %d mode.",
- tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
}
/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
return -EINVAL;
}
- if (tx_mq_mode != ETH_MQ_TX_NONE &&
- tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+ tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
" Due to txmode is meaningless in this"
" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
/*
* VLAN Offload Settings
*/
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_igb_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
return ret;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
case e1000_82576:
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return -EINVAL;
}
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = 0x3FFF; /* See RLPML register. */
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
switch (hw->mac.type) {
case e1000_vfadapt:
dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else if (!link_check) {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= E1000_CTRL_EXT_EXT_VLAN;
/* only outer TPID of double VLAN can be configured*/
- if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg = E1000_READ_REG(hw, E1000_VET);
reg = (reg & (~E1000_VET_VET_EXT)) |
((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igb_vlan_hw_strip_enable(dev);
else
igb_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igb_vlan_hw_filter_enable(dev);
else
igb_vlan_hw_filter_disable(dev);
}
- if(mask & ETH_VLAN_EXTEND_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
igb_vlan_hw_extend_enable(dev);
else
igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* on configuration
*/
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
ctrl |= E1000_CTRL_RFCE;
ctrl &= ~E1000_CTRL_TFCE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
ctrl |= E1000_CTRL_TFCE;
ctrl &= ~E1000_CTRL_RFCE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
break;
default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
if (*vfinfo == NULL)
rte_panic("Cannot allocate memory for private VF data\n");
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -186,7 +186,7 @@ struct igb_tx_queue {
/**< Start context position for transmit queue. */
struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
RTE_SET_USED(dev);
- tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return tx_offload_capa;
}
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == e1000_i350 ||
hw->mac.type == e1000_i210 ||
hw->mac.type == e1000_i211)
- rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
return rx_offload_capa;
}
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
E1000_VMOLR_MPME);
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
vmolr |= E1000_VMOLR_AUPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
vmolr |= E1000_VMOLR_ROMPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
vmolr |= E1000_VMOLR_ROPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
vmolr |= E1000_VMOLR_BAM;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
vmolr |= E1000_VMOLR_MPME;
E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VLVF: set up filters for vlan tags as configured */
for (i = 0; i < cfg->nb_pool_maps; i++) {
/* set vlan id in VF register and set the valid bit */
- E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
- (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
- ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+ E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+ (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+ ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
E1000_VLVF_POOLSEL_MASK)));
}
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t mrqc;
- if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+ if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
/*
* SRIOV active scheme
* FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default:
igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Set maximum packet length by default, and might be updated
* together with enabling/disabling dual VLAN.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
max_len += VLAN_TAG_SIZE;
E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
rxcsum |= E1000_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
if (rxmode->offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
rxcsum |= E1000_RXCSUM_TUOFL;
else
rxcsum &= ~E1000_RXCSUM_TUOFL;
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_CRCOFL;
else
rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
- DEV_TX_OFFLOAD_UDP_CKSUM |\
- DEV_TX_OFFLOAD_IPV4_CKSUM |\
- DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
PKT_TX_IP_CKSUM |\
PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
(queue_offloads & QUEUE_OFFLOADS)) {
/* check if TSO is required */
if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
ena_tx_ctx->tso_enable = true;
ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L3 checksum is needed */
if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
ena_tx_ctx->l3_csum_enable = true;
if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L4 checksum is needed */
if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
ena_tx_ctx->l4_csum_enable = true;
} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
PKT_TX_UDP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
ena_tx_ctx->l4_csum_enable = true;
} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
struct rte_eth_link *link = &dev->data->dev_link;
struct ena_adapter *adapter = dev->data->dev_private;
- link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
if (rc)
goto err_start_tx;
- if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
rc = ena_rss_configure(adapter);
if (rc)
goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
adapter->state = ENA_ADAPTER_STATE_CONFIG;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
- dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Scattered Rx cannot be turned off in the HW, so this capability must
* be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.rx_offloads &
(ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
port_offloads |=
- DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
- port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return port_offloads;
}
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
- port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.tx_offloads &
(ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
port_offloads |=
- DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
- port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return port_offloads;
}
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
dev_info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/* Inform framework about available features */
dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
#endif
- fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+ fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
descs_in_use = rx_ring->ring_size -
ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
#ifdef RTE_LIBRTE_ETHDEV_DEBUG
/* Check if requested offload is also enabled for the queue */
if ((ol_flags & PKT_TX_IP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
(l4_csum_flag == PKT_TX_TCP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
(l4_csum_flag == PKT_TX_UDP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
PMD_TX_LOG(DEBUG,
"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
i, m->nb_segs, tx_ring->id);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
#define ENA_HASH_KEY_SIZE 40
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
/* Each reta_conf is for 64 entries.
* To support 128 we use 2 conf of 64.
*/
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
entry_value =
ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0 ; i < reta_size ; i++) {
- reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
reta_conf[reta_conf_idx].reta[reta_idx] =
ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Convert proto to ETH flag */
switch (proto) {
case ENA_ADMIN_RSS_TCP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
break;
case ENA_ADMIN_RSS_UDP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
break;
case ENA_ADMIN_RSS_TCP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
break;
case ENA_ADMIN_RSS_UDP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
break;
case ENA_ADMIN_RSS_IP4:
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
break;
case ENA_ADMIN_RSS_IP6:
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
break;
case ENA_ADMIN_RSS_IP4_FRAG:
- rss_hf |= ETH_RSS_FRAG_IPV4;
+ rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
break;
case ENA_ADMIN_RSS_NOT_IP:
- rss_hf |= ETH_RSS_L2_PAYLOAD;
+ rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
break;
case ENA_ADMIN_RSS_TCP6_EX:
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
break;
case ENA_ADMIN_RSS_IP6_EX:
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
break;
default:
break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L3. */
switch (fields & ENA_HF_RSS_ALL_L3) {
case ENA_ADMIN_RSS_L3_SA:
- rss_hf |= ETH_RSS_L3_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L3_DA:
- rss_hf |= ETH_RSS_L3_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
break;
default:
break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L4. */
switch (fields & ENA_HF_RSS_ALL_L4) {
case ENA_ADMIN_RSS_L4_SP:
- rss_hf |= ETH_RSS_L4_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L4_DP:
- rss_hf |= ETH_RSS_L4_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
break;
default:
break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
/* Determine which fields of L3 should be used. */
- switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
- case ETH_RSS_L3_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+ case RTE_ETH_RSS_L3_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_DA;
break;
- case ETH_RSS_L3_SRC_ONLY:
+ case RTE_ETH_RSS_L3_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_SA;
break;
default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
}
/* Determine which fields of L4 should be used. */
- switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
- case ETH_RSS_L4_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ case RTE_ETH_RSS_L4_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_DP;
break;
- case ETH_RSS_L4_SRC_ONLY:
+ case RTE_ETH_RSS_L4_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_SP;
break;
default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
int rc, i;
/* Turn on appropriate fields for each requested packet type */
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
- if ((rss_hf & ETH_RSS_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
selected_fields[ENA_ADMIN_RSS_IP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
- if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
- if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+ if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
uint16_t admin_hf;
static bool warn_once;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
return -ENOTSUP;
}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
if (status & ENETC_LINK_MODE)
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
else
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
if (status & ENETC_LINK_STATUS)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
switch (status & ENETC_LINK_SPEED_MASK) {
case ENETC_LINK_SPEED_1G:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ENETC_LINK_SPEED_100M:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
default:
case ENETC_LINK_SPEED_10M:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa =
- (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC);
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC);
return 0;
}
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
RTE_ETH_QUEUE_STATE_STOPPED;
}
- rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0);
return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
int config;
config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
checksum &= ~L3_CKSUM;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
checksum &= ~L4_CKSUM;
enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
*/
uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
uint8_t rss_enable;
- uint64_t rss_hf; /* ETH_RSS flags */
+ uint64_t rss_hf; /* RTE_ETH_RSS flags */
union vnic_rss_key rss_key;
union vnic_rss_cpu rss_cpu;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
uint16_t sub_devid;
uint32_t capa;
} vic_speed_capa_map[] = {
- { 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
- { 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
- { 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
- { 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
- { 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
- { 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
- { 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
- { 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
- { 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
- { 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
- { 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
- { 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
- { 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
- { 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
- { 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1440 Mezz */
- { 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1480 MLOM */
- { 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
- { 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
- { 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
- { 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
- { 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
- { 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+ { 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+ { 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+ { 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+ { 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+ { 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+ { 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+ { 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+ { 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+ { 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+ { 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+ { 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+ { 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+ { 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+ { 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+ { 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+ { 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+ { 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+ { 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+ { 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+ { 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+ { 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+ { 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
{ 0, 0 }, /* End marker */
};
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
ENICPMD_FUNC_TRACE();
offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
enic->ig_vlan_strip_en = 1;
else
enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->mc_count = 0;
enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM);
+ RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* All vlan offload masks to apply the current settings */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = enicpmd_vlan_offload_set(eth_dev, mask);
if (ret) {
dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
}
/* 1300 and later models are at least 40G */
if (id >= 0x0100)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
/* VFs have subsystem id 0, check device id */
if (id == 0) {
/* Newer VF implies at least 40G model */
if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
}
- return ETH_LINK_SPEED_10G;
+ return RTE_ETH_LINK_SPEED_10G;
}
static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
*/
rss_cpu = enic->rss_cpu;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
rss_cpu.cpu[i / 4].b[i % 4] =
enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
*/
conf->offloads = enic->rx_offload_capa;
if (!enic->ig_vlan_strip_en)
- conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx_thresh and other fields are not applicable for enic */
}
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
static int udp_tunnel_common_check(struct enic *enic,
struct rte_eth_udp_tunnel *tnl)
{
- if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
- tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+ if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+ tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
memset(&link, 0, sizeof(link));
link.link_status = enic_get_link_status(enic);
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = vnic_dev_port_speed(enic->vdev);
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
}
eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* vnic notification of link status has already been turned on in
* enic_dev_init() which is called during probe time. Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
* and vlan insertion are supported.
*/
simple_tx_offloads = enic->tx_offload_capa &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if ((eth_dev->data->dev_conf.txmode.offloads &
~simple_tx_offloads) == 0) {
ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type = 0;
rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
if (enic->rq_count > 1 &&
- (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
rss_hf != 0) {
rss_enable = 1;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
if (enic->udp_rss_weak) {
/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
}
}
- if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
if (enic->udp_rss_weak)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
return -EINVAL;
}
enic->tx_offload_capa |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- (enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
- (enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ (enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+ (enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
enic->tx_offload_mask |=
PKT_TX_OUTER_IPV6 |
PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
* IPV4 hash type handles both non-frag and frag packet types.
* TCP/UDP is controlled via a separate flag below.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (ENIC_SETTING(enic, RSSHASH_IPV6))
/*
* The VIC adapter can perform RSS on IPv6 packets with and
* without extension headers. An IPv6 "fragment" is an IPv6
* packet with the fragment extension header.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (enic->udp_rss_weak)
enic->flow_type_rss_offloads |=
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
/* Zero offloads if RSS is not enabled */
if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
enic->tx_queue_offload_capa = 0;
enic->tx_offload_capa =
enic->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->tx_offload_mask =
PKT_TX_IPV6 |
PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
static const struct rte_eth_link eth_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
};
static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
int qid;
struct rte_eth_dev *fsdev;
struct rxq **rxq;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
 &ETH(sdev)->data->dev_conf.intr_conf;
fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
failsafe_rx_intr_install(struct rte_eth_dev *dev)
{
struct fs_priv *priv = PRIV(dev);
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&priv->data->dev_conf.intr_conf;
if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
* configuring a sub-device.
*/
infos->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->rx_queue_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
infos->flow_type_rss_offloads =
- ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP;
+ RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP;
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
uint8_t drop_en;
uint8_t rx_deferred_start; /* don't start this queue in dev start. */
uint16_t rx_ftag_en; /* indicates FTAG RX supported */
- uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
};
/*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
uint16_t next_rs; /* Next pos to set RS flag */
uint16_t next_dd; /* Next pos to check DD flag */
volatile uint32_t *tail_ptr;
- uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
uint16_t nb_desc;
uint16_t port_id;
uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
- if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
return 0;
if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
};
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
*/
hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
if (mrqc == 0) {
PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
if (hw->mac.type != fm10k_mac_pf)
return;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
nb_queue_pools = vmdq_conf->nb_queue_pools;
/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
- rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t reg;
dev->data->scattered_rx = 1;
reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
}
/* Update default vlan when not in VMDQ mode */
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_50G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
dev->data->dev_link.link_status =
- dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+ dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
return 0;
}
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->max_vfs = pdev->max_vfs;
dev_info->vmdq_pool_base = 0;
dev_info->vmdq_queue_base = 0;
- dev_info->max_vmdq_pools = ETH_32_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_32_POOLS;
dev_info->vmdq_queue_num = FM10K_MAX_QUEUES_PF;
dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
dev_info->reta_size = FM10K_MAX_RSS_INDICES;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
return -EINVAL;
}
- if (vlan_id > ETH_VLAN_ID_MAX) {
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
return -EINVAL;
}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
}
static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_RSS_HASH);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
}
static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO);
+ return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO);
}
static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
return -EINVAL;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
/* If the mapping doesn't fit any supported, return */
if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
hf = 0;
- hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV4) ? RTE_ETH_RSS_IPV4 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX : 0;
rss_conf->rss_hf = hf;
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
/* first clear the internal SW recording structure */
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
false);
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
MAIN_VSI_POOL_NUMBER);
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
true);
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
/* without rx ol_flags, no VP flag report */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
#endif
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
static int hinic_link_event_process(struct hinic_hwdev *hwdev,
struct rte_eth_dev *eth_dev, u8 status)
{
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
struct nic_port_info port_info;
struct rte_eth_link link;
int rc = HINIC_OK;
if (!status) {
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
memset(&port_info, 0, sizeof(port_info));
rc = hinic_get_port_info(hwdev, &port_info);
if (rc) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
link.link_speed = port_speed[port_info.speed %
LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
/* init vlan offload */
err = hinic_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
} else {
*speed_capa = 0;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
- *speed_capa |= ETH_LINK_SPEED_1G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
- *speed_capa |= ETH_LINK_SPEED_10G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
- *speed_capa |= ETH_LINK_SPEED_25G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
- *speed_capa |= ETH_LINK_SPEED_40G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
- *speed_capa |= ETH_LINK_SPEED_100G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_100G;
}
}
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
hinic_get_speed_capa(dev, &info->speed_capa);
info->rx_queue_offload_capa = 0;
- info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_RSS_HASH;
+ info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
info->tx_queue_offload_capa = 0;
- info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
u8 port_link_status = 0;
struct nic_port_info port_link_info;
struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
rc = hinic_get_link_status(nic_hwdev, &port_link_status);
if (rc)
return rc;
if (!port_link_status) {
- link->link_status = ETH_LINK_DOWN;
+ link->link_status = RTE_ETH_LINK_DOWN;
link->link_speed = 0;
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
- link->link_autoneg = ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
return HINIC_OK;
}
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* Get link status information from hardware */
rc = hinic_priv_get_dev_link_status(nic_dev, &link);
if (rc != HINIC_OK) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Get link status failed");
goto out;
}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
int err;
/* Enable or disable VLAN filter */
- if (mask & ETH_VLAN_FILTER_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
TRUE : FALSE;
err = hinic_config_vlan_filter(nic_dev->hwdev, on);
if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Enable or disable VLAN stripping */
- if (mask & ETH_VLAN_STRIP_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
TRUE : FALSE;
err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = nic_pause.auto_neg;
if (nic_pause.tx_pause && nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (nic_pause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else if (nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_pause.auto_neg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
nic_pause.tx_pause = true;
else
nic_pause.tx_pause = false;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
nic_pause.rx_pause = true;
else
nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err = 0;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_OK;
}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_ERROR;
}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
rss_conf->rss_hf |= rss_type.ipv4 ?
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
rss_conf->rss_hf |= rss_type.ipv6 ?
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
- rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
return HINIC_OK;
}
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
u16 i = 0;
u16 idx, shift;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
return HINIC_OK;
if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
/* update rss indir_tbl */
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
{
u64 rss_hf = rss_conf->rss_hf;
- rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
}
static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
{
int err, i;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
nic_dev->num_rss = 0;
if (nic_dev->num_rq > 1) {
/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
PMD_DRV_LOG(WARNING, "Alloc rss template failed");
return err;
}
- nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
for (i = 0; i < nic_dev->num_rq; i++)
hinic_add_rq_to_rx_queue_list(nic_dev, i);
}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
{
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (hinic_rss_template_free(nic_dev->hwdev,
nic_dev->rss_tmpl_idx))
PMD_DRV_LOG(WARNING, "Free rss template failed");
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
}
}
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
int ret = 0;
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
ret = hinic_config_mq_rx_rss(nic_dev, on);
break;
default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
int lro_wqe_num;
int buf_size;
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (rss_conf.rss_hf == 0) {
rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
}
/* Enable both L3/L4 rx checksum offload */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
goto rx_csum_ofl_err;
/* config lro */
- lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
true : false;
max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
hinic_rss_deinit(nic_dev);
hinic_destroy_num_qps(nic_dev);
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
#define HINIC_DEFAULT_RX_FREE_THRESH 32
#define HINIC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
return ret;
}
- if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
if (dcb_rx_conf->nb_tcs == 0)
hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
uint16_t nb_tx_q = hw->data->nb_tx_queues;
int ret;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
return 0;
ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
{
switch (mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->requested_fc_mode = HNS3_FC_NONE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->requested_fc_mode = HNS3_FC_FULL;
break;
default:
hw->requested_fc_mode = HNS3_FC_NONE;
hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
- "configured to RTE_FC_NONE", mode);
+ "configured to RTE_ETH_FC_NONE", mode);
break;
}
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f58704..8e0ccecb57a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
};
static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
- { ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
};
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
struct hns3_cmd_desc desc;
int ret;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
return -EINVAL;
}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rte_spinlock_lock(&hw->lock);
rxmode = &dev->data->dev_conf.rxmode;
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
/* ignore vlan filter configuration during promiscuous mode */
if (!dev->data->promiscuous) {
/* Enable or disable VLAN filter */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
true : false;
ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
true : false;
ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
return ret;
}
- ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+ ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
if (ret) {
hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
if (!hw->data->promiscuous) {
/* restore vlan filter states */
offloads = hw->data->dev_conf.rxmode.offloads;
- enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+ enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
ret = hns3_enable_vlan_filter(hns, enable);
if (ret) {
hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
txmode->hw_vlan_reject_untagged);
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
ret = hns3_vlan_offload_set(dev, mask);
if (ret) {
hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
- (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
- hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+ hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
"nb_tcs(%d) != %d or %d in rx direction.",
dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
* configure link_speeds (default 0), which means auto-negotiation.
* In this case, it should return success.
*/
- if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
hw->mac.support_autoneg == 0)
return 0;
- if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
ret = hns3_check_port_speed(hw, link_speeds);
if (ret)
return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
if (ret)
goto cfg_err;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = hns3_setup_dcb(dev);
if (ret)
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rss_conf = conf->rx_adv_conf.rss_conf;
hw->rss_dis_flag = false;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_10M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
- speed_capa |= ETH_LINK_SPEED_10M;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
return speed_capa;
}
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
return speed_capa;
}
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_firber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
if (hns3_dev_get_support(hw, PTP))
- info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
ret = hns3_update_link_info(eth_dev);
if (ret)
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
return ret;
}
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link->link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_NONE;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link->link_duplex = mac->link_duplex;
- new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link->link_autoneg = mac->link_autoneg;
}
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
new_link.link_duplex = mac->link_duplex;
- new_link.link_speed = ETH_SPEED_NUM_NONE;
- new_link.link_status = ETH_LINK_DOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ new_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
break;
}
- if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+ if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
{
switch (speed_cmd) {
case HNS3_CFG_SPEED_10M:
- *speed = ETH_SPEED_NUM_10M;
+ *speed = RTE_ETH_SPEED_NUM_10M;
break;
case HNS3_CFG_SPEED_100M:
- *speed = ETH_SPEED_NUM_100M;
+ *speed = RTE_ETH_SPEED_NUM_100M;
break;
case HNS3_CFG_SPEED_1G:
- *speed = ETH_SPEED_NUM_1G;
+ *speed = RTE_ETH_SPEED_NUM_1G;
break;
case HNS3_CFG_SPEED_10G:
- *speed = ETH_SPEED_NUM_10G;
+ *speed = RTE_ETH_SPEED_NUM_10G;
break;
case HNS3_CFG_SPEED_25G:
- *speed = ETH_SPEED_NUM_25G;
+ *speed = RTE_ETH_SPEED_NUM_25G;
break;
case HNS3_CFG_SPEED_40G:
- *speed = ETH_SPEED_NUM_40G;
+ *speed = RTE_ETH_SPEED_NUM_40G;
break;
case HNS3_CFG_SPEED_50G:
- *speed = ETH_SPEED_NUM_50G;
+ *speed = RTE_ETH_SPEED_NUM_50G;
break;
case HNS3_CFG_SPEED_100G:
- *speed = ETH_SPEED_NUM_100G;
+ *speed = RTE_ETH_SPEED_NUM_100G;
break;
case HNS3_CFG_SPEED_200G:
- *speed = ETH_SPEED_NUM_200G;
+ *speed = RTE_ETH_SPEED_NUM_200G;
break;
default:
return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
switch (speed) {
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
break;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
break;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
break;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
int ret;
pf->support_sfp_query = true;
- mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+ mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
if (ret) {
PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
return ret;
}
- mac->link_status = ETH_LINK_DOWN;
+ mac->link_status = RTE_ETH_LINK_DOWN;
return hns3_config_mtu(hw, pf->mps);
}
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
* all packets coming in in the receiving direction.
*/
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, false);
if (ret) {
hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
}
/* when promiscuous mode was disabled, restore the vlan filter status */
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, true);
if (ret) {
hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
mac_info->supported_speed =
rte_le_to_cpu_32(resp->supported_speed);
mac_info->support_autoneg = resp->autoneg_ability;
- mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
- : ETH_LINK_AUTONEG;
+ mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+ : RTE_ETH_LINK_AUTONEG;
} else {
mac_info->query_type = HNS3_DEFAULT_QUERY;
}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
static uint8_t
hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
{
- if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
- duplex = ETH_LINK_FULL_DUPLEX;
+ if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
return duplex;
}
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
return ret;
/* Do nothing if no SFP */
- if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+ if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
return 0;
/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
/* Config full duplex for SFP */
return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
- ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_FULL_DUPLEX);
}
static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
/*
- * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+ * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
* when receiving frames. Otherwise, CRC will be stripped.
*/
- if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
else
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
hns3_err(hw, "get link status cmd failed %d", ret);
- return ETH_LINK_DOWN;
+ return RTE_ETH_LINK_DOWN;
}
req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
return HNS3_FIBER_LINK_SPEED_1G_BIT;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
return HNS3_FIBER_LINK_SPEED_10G_BIT;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
return HNS3_FIBER_LINK_SPEED_25G_BIT;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
return HNS3_FIBER_LINK_SPEED_40G_BIT;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
return HNS3_FIBER_LINK_SPEED_50G_BIT;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
return HNS3_FIBER_LINK_SPEED_100G_BIT;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
return HNS3_FIBER_LINK_SPEED_200G_BIT;
default:
hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M:
speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
break;
- case ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
break;
- case ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M:
speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
break;
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
break;
default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_1G:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
break;
default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
static inline uint32_t
hns3_get_link_speed(uint32_t link_speeds)
{
- uint32_t speed = ETH_SPEED_NUM_NONE;
-
- if (link_speeds & ETH_LINK_SPEED_10M ||
- link_speeds & ETH_LINK_SPEED_10M_HD)
- speed = ETH_SPEED_NUM_10M;
- if (link_speeds & ETH_LINK_SPEED_100M ||
- link_speeds & ETH_LINK_SPEED_100M_HD)
- speed = ETH_SPEED_NUM_100M;
- if (link_speeds & ETH_LINK_SPEED_1G)
- speed = ETH_SPEED_NUM_1G;
- if (link_speeds & ETH_LINK_SPEED_10G)
- speed = ETH_SPEED_NUM_10G;
- if (link_speeds & ETH_LINK_SPEED_25G)
- speed = ETH_SPEED_NUM_25G;
- if (link_speeds & ETH_LINK_SPEED_40G)
- speed = ETH_SPEED_NUM_40G;
- if (link_speeds & ETH_LINK_SPEED_50G)
- speed = ETH_SPEED_NUM_50G;
- if (link_speeds & ETH_LINK_SPEED_100G)
- speed = ETH_SPEED_NUM_100G;
- if (link_speeds & ETH_LINK_SPEED_200G)
- speed = ETH_SPEED_NUM_200G;
+ uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+ if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+ link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+ speed = RTE_ETH_SPEED_NUM_10M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+ link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+ speed = RTE_ETH_SPEED_NUM_100M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+ speed = RTE_ETH_SPEED_NUM_1G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+ speed = RTE_ETH_SPEED_NUM_10G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+ speed = RTE_ETH_SPEED_NUM_25G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+ speed = RTE_ETH_SPEED_NUM_40G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+ speed = RTE_ETH_SPEED_NUM_50G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+ speed = RTE_ETH_SPEED_NUM_100G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+ speed = RTE_ETH_SPEED_NUM_200G;
return speed;
}
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
static uint8_t
hns3_get_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
struct hns3_set_link_speed_cfg cfg;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
- cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
- if (cfg.autoneg != ETH_LINK_AUTONEG) {
+ cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
ret = hns3_cfg_mac_mode(hw, false);
if (ret)
return ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
current_mode = hns3_get_current_fc_mode(dev);
switch (current_mode) {
case HNS3_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case HNS3_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HNS3_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case HNS3_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
}
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
int i;
rte_spinlock_lock(&hw->lock);
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = pf->local_max_tc;
else
dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
struct rte_eth_dev *eth_dev;
eth_dev = &rte_eth_devices[hw->data->port_id];
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (hw->adapter_state == HNS3_NIC_STARTED) {
rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
* in device of link speed
* below 10 Gbps.
*/
- if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+ if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
*state = 0;
return 0;
}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
* configured FEC mode is returned.
* If link is up, current FEC mode is returned.
*/
- if (hw->mac.link_status == ETH_LINK_DOWN) {
+ if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
ret = get_current_fec_auto_state(hw, &auto_state);
if (ret)
return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
uint32_t cur_capa;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
cur_capa = fec_capa[1].capa;
break;
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
cur_capa = fec_capa[0].capa;
break;
default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e28056b1bd60..0f55fd4c83ad 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t media_type;
uint8_t phy_addr;
- uint8_t link_duplex : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
- uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
- uint8_t link_status : 1; /* ETH_LINK_[DOWN/UP] */
- uint32_t link_speed; /* ETH_SPEED_NUM_ */
+ uint8_t link_duplex : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint8_t link_status : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /* RTE_ETH_SPEED_NUM_ */
/*
* Some firmware versions support only the SFP speed query. In addition
* to the SFP speed query, some firmware supports the query of the speed
@@ -1076,9 +1076,9 @@ static inline uint64_t
hns3_txvlan_cap_get(struct hns3_hw *hw)
{
if (hw->port_base_vlan_cfg.state)
- return DEV_TX_OFFLOAD_VLAN_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
else
- return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
}
#endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798f2..7b784048b518 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
hns3_err(hw, "setting link speed/duplex not supported");
ret = -EINVAL;
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
hw->rss_dis_flag = false;
rss_conf = conf->rx_adv_conf.rss_conf;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN filter */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = hns3vf_en_vlan_filter(hw, true);
else
ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Vlan stripping setting */
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ret = hns3vf_en_hw_strip_rxvtag(hw, true);
else
ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
int ret;
dev_conf = &hw->data->dev_conf;
- en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+ en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
: false;
ret = hns3vf_en_hw_strip_rxvtag(hw, en);
if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
}
/* Apply vlan offload setting */
- ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
struct hns3_hw *hw = &hns->hw;
int ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
/*
* The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&new_link, 0, sizeof(new_link));
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link.link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link.link_duplex = mac->link_duplex;
- new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(eth_dev, &new_link);
}
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
* Make sure call update link status before hns3vf_stop_poll_job
* because update link status depend on polling job exist.
*/
- hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+ hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
hw->mac.link_duplex);
hns3vf_stop_poll_job(eth_dev);
}
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
hns3_set_rxtx_function(eth_dev);
rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
* Kunpeng930 and future kunpeng series support to use src/dst port
* fields to RSS hash for IPv6 SCTP packet type.
*/
- if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
- (rss->types & ETH_RSS_IP ||
+ if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+ (rss->types & RTE_ETH_RSS_IP ||
(!hw->rss_info.ipv6_sctp_offload_supported &&
- rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+ rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return false;
return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
struct hns3_hw *hw = &hns->hw;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
return 0;
ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_tuple_table[] = {
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
};
@@ -146,44 +146,44 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_rss_types[] = {
- { ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+ { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+ { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
};
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
* When user does not specify the following types or a combination of
* the following types, it enables all fields for the supported RSS
* types. the following types as:
- * - ETH_RSS_L3_SRC_ONLY
- * - ETH_RSS_L3_DST_ONLY
- * - ETH_RSS_L4_SRC_ONLY
- * - ETH_RSS_L4_DST_ONLY
+ * - RTE_ETH_RSS_L3_SRC_ONLY
+ * - RTE_ETH_RSS_L3_DST_ONLY
+ * - RTE_ETH_RSS_L4_SRC_ONLY
+ * - RTE_ETH_RSS_L4_DST_ONLY
*/
if (fields_count == 0) {
for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
sizeof(rss_cfg->rss_indirection_tbl));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
rte_spinlock_unlock(&hw->lock);
hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
}
rte_spinlock_lock(&hw->lock);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] =
rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/* When RSS is off, redirect the packet queue 0 */
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
hns3_rss_uninit(hns);
/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
* When RSS is off, it doesn't need to configure rss redirection table
* to hardware.
*/
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
hw->rss_ind_tbl_size);
if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
return ret;
rss_indir_table_uninit:
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret1 = hns3_rss_reset_indir_table(hw);
if (ret1 != 0)
return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
#include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
#define HNS3_RSS_IND_TBL_SIZE 512 /* The size of hash lookup table */
#define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
/* CRC len set here is used for amending packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
rxq->rx_buf_len);
}
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = !dev->data->scattered_rx &&
- (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+ (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
int ret;
offloads = hw->data->dev_conf.rxmode.offloads;
- gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return false;
- return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+ return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
}
static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
return true;
#else
#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
uint16_t rx_rearm_nb; /* number of remaining BDs to be re-armed */
- /* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+ /* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
uint8_t crc_len;
/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return -ENOTSUP;
- /* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
- if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ /* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
return -ENOTSUP;
return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
int
hns3_rx_check_vec_support(struct rte_eth_dev *dev)
{
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN;
+ uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* Set the global registers with default ether type value */
if (!pf->support_multi_driver) {
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
if (ret != I40E_SUCCESS) {
PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Only legacy filter API needs the following fdir config. So when the
* legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
* number, which will be available after rx_queue_setup(). dev_start()
* function is good to place RSS setup.
*/
- if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
ret = i40e_vmdq_setup(dev);
if (ret)
goto err;
}
- if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = i40e_dcb_setup(dev);
if (ret) {
PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
{
uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed |= I40E_LINK_SPEED_40GB;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed |= I40E_LINK_SPEED_25GB;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed |= I40E_LINK_SPEED_20GB;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed |= I40E_LINK_SPEED_10GB;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed |= I40E_LINK_SPEED_1GB;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
link_speed |= I40E_LINK_SPEED_100MB;
return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
I40E_AQ_PHY_LINK_ENABLED;
- if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
- conf->link_speeds = ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_100M;
+ if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+ conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_100M;
abilities |= I40E_AQ_PHY_AN_ENABLED;
} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
/* Parse the link status */
switch (link_speed) {
case I40E_REG_SPEED_0:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_REG_SPEED_1:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_REG_SPEED_2:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_REG_SPEED_3:
if (hw->mac.type == I40E_MAC_X722) {
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
} else {
reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
if (reg_val & I40E_REG_MACC_25GB)
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
else
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
}
break;
case I40E_REG_SPEED_4:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
default:
PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
status = i40e_aq_get_link_info(hw, enable_lse,
&link_status, NULL);
if (unlikely(status != I40E_SUCCESS)) {
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
return;
}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
/* Parse the link status */
switch (link_status.link_speed) {
case I40E_LINK_SPEED_100MB:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (link->link_status)
- link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
/* i40e uses full duplex only */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (!wait_to_complete && !enable_lse)
update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
/* For XL710 */
- dev_info->speed_capa = ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
dev_info->default_rxportconf.nb_queues = 2;
dev_info->default_txportconf.nb_queues = 2;
if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
/* For XXV710 */
- dev_info->speed_capa = ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
dev_info->default_txportconf.ring_size = 256;
} else {
/* For X710 */
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
dev_info->default_rxportconf.ring_size = 512;
dev_info->default_txportconf.ring_size = 256;
} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
int ret;
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
reg_id = 2;
}
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
int ret = 0;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) ||
- (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+ (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
/* 802.1ad frames ability is added in NVM API 1.7*/
if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->first_tag = rte_cpu_to_le_16(tpid);
- else if (vlan_type == ETH_VLAN_TYPE_INNER)
+ else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
hw->second_tag = rte_cpu_to_le_16(tpid);
} else {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->second_tag = rte_cpu_to_le_16(tpid);
}
ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
i40e_vsi_config_vlan_filter(vsi, TRUE);
else
i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_vlan_stripping(vsi, FALSE);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
i40e_vsi_config_double_vlan(vsi, TRUE);
/* Set global registers with default ethertype. */
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
}
else
i40e_vsi_config_double_vlan(vsi, FALSE);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
/* Enable or disable outer VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* Return current mode according to actual setting*/
switch (hw->fc.current_mode) {
case I40E_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case I40E_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case I40E_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case I40E_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
};
return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
struct i40e_hw *hw;
struct i40e_pf *pf;
enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
- [RTE_FC_NONE] = I40E_FC_NONE,
- [RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
- [RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
- [RTE_FC_FULL] = I40E_FC_FULL
+ [RTE_ETH_FC_NONE] = I40E_FC_NONE,
+ [RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+ [RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+ [RTE_ETH_FC_FULL] = I40E_FC_FULL
};
/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
}
rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
hw->func_caps.num_vsis - vsi_count);
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS);
+ RTE_ETH_64_POOLS);
if (pf->max_nb_vmdq_vsi) {
pf->flags |= I40E_FLAG_VMDQ;
pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
int mask = 0;
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = i40e_vlan_offload_set(dev, mask);
if (ret) {
PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
- if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+ if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
- else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+ else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
else {
PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
{
uint32_t vid_idx, vid_bit;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return 0;
vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
int ret;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return;
i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
{
switch (filter_type) {
- case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
break;
- case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_IMAC_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
break;
- case ETH_TUNNEL_FILTER_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
break;
- case ETH_TUNNEL_FILTER_OIP:
+ case RTE_ETH_TUNNEL_FILTER_OIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
break;
- case ETH_TUNNEL_FILTER_IIP:
+ case RTE_ETH_TUNNEL_FILTER_IIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
break;
default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8843,7 +8843,7 @@ int
i40e_pf_reset_rss_reta(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->adapter->hw;
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
int num;
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
* configured. It's necessary to calculate the actual PF
* queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
else
num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
if (!(rss_hf & pf->adapter->flow_types_mask) ||
- !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return 0;
hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_25G:
tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
else
*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_cfg->pfc.willing = 0;
dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
uint16_t bsf, tc_mapping;
int i, j = 0;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
else
dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
}
j++;
- } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+ } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
return 0;
}
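As an aside on the `RTE_RETA_GROUP_SIZE` → `RTE_ETH_RETA_GROUP_SIZE` rename in the RETA query hunk above: the `idx`/`shift` arithmetic splits a flat redirection-table position into a group index and a bit offset within that group's validity mask. A minimal sketch, assuming the DPDK value of 64 entries per group (the real macro and `rte_eth_rss_reta_entry64` live in `rte_ethdev.h`):

```c
#include <stdint.h>

/* Stand-in for RTE_ETH_RETA_GROUP_SIZE (64 in rte_ethdev.h). */
#define RTE_ETH_RETA_GROUP_SIZE 64

/* Group that table position i falls into. */
static inline uint32_t reta_group_idx(uint32_t i)
{
	return i / RTE_ETH_RETA_GROUP_SIZE;
}

/* Bit offset of position i inside its group's 64-bit mask. */
static inline uint32_t reta_group_shift(uint32_t i)
{
	return i % RTE_ETH_RETA_GROUP_SIZE;
}
```

So entry 130, for example, is bit 2 of `reta_conf[2].mask`, matching the `reta_conf[idx].mask & (1ULL << shift)` test in the hunk.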
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
I40E_FLAG_RSS_AQ_CAPABLE)
#define I40E_RSS_OFFLOAD_ALL ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/* All bits of RSS hash enable for X722*/
#define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /**< Hash key. */
- uint16_t queue[ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
+ uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
bool symmetric_enable; /**< true, if enable symmetric */
uint64_t config_pctypes; /**< All PCTYPES with the flow */
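The `I40E_RSS_OFFLOAD_ALL` rename above is a pure bitmask composition: the driver advertises the OR of every `RTE_ETH_RSS_*` flag it supports, and a requested hash-function set is acceptable only if it is a subset of that mask. A hedged sketch with made-up bit positions (the real flag values are defined in `rte_ethdev.h`):

```c
#include <stdint.h>

/* Illustrative bit values only; not the real rte_ethdev.h constants. */
#define RTE_ETH_RSS_IPV4              (UINT64_C(1) << 2)
#define RTE_ETH_RSS_NONFRAG_IPV4_TCP  (UINT64_C(1) << 4)
#define RTE_ETH_RSS_NONFRAG_IPV4_UDP  (UINT64_C(1) << 5)

/* Capability mask built the same way as I40E_RSS_OFFLOAD_ALL. */
#define EXAMPLE_RSS_OFFLOAD_ALL ( \
	RTE_ETH_RSS_IPV4 | \
	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
	RTE_ETH_RSS_NONFRAG_IPV4_UDP)

/* Accept rss_hf only if every requested bit is advertised. */
static inline int rss_type_supported(uint64_t rss_hf)
{
	return (rss_hf & ~EXAMPLE_RSS_OFFLOAD_ALL) == 0;
}
```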
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
}
static uint16_t i40e_supported_tunnel_filter_types[] = {
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IMAC,
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
};
static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
vxlan_spec->vni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
nvgre_spec->tni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
/* IPv4 */
- { ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* IPv6 */
- { ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* Port */
- { ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
/* Ether */
- { ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
/* VLAN */
- { ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
};
#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
pattern, rss_mask, true, cus_pctype }
-#define I40E_HASH_L2_RSS_MASK (ETH_RSS_VLAN | ETH_RSS_ETH | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY)
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
-#define I40E_HASH_IPV4_L23_RSS_MASK (ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-#define I40E_HASH_L4_TYPES (ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
static const struct i40e_hash_match_pattern match_patterns[] = {
/* Ether */
I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+ RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
I40E_FILTER_PCTYPE_L2_PAYLOAD),
/* IPv4 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV4),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
/* IPv6 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
/* ESP */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
/* GTPC */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
/* L2TPV3 */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
/* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV6),
};
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
* it is the same case as none of them are added.
*/
- mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
- if (mask == ETH_RSS_L2_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
inset &= ~I40E_INSET_DMAC;
- else if (mask == ETH_RSS_L2_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
inset &= ~I40E_INSET_SMAC;
- mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
- if (mask == ETH_RSS_L3_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == ETH_RSS_L3_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
- if (mask == ETH_RSS_L4_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
inset &= ~I40E_INSET_DST_PORT;
- else if (mask == ETH_RSS_L4_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
inset &= ~I40E_INSET_SRC_PORT;
if (rss_types & I40E_HASH_L4_TYPES) {
uint64_t l3_mask = rss_types &
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
uint64_t l4_mask = rss_types &
- (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
if (l3_mask && !l4_mask)
inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
/* Update lookup table */
if (rss_info->queue_num > 0) {
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i, j = 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
"RSS key is ignored when queues specified");
pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
max_queue = i40e_pf_calc_configured_queues_num(pf);
else
max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
uint64_t type, mask;
/* Validate L2 */
- type = ETH_RSS_ETH & rss_types;
- mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L3 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L4 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
- mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
if (!type && mask)
return false;
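The validation logic renamed in `i40e_hash_validate_rss_types()` above follows one rule per layer: a `*_SRC_ONLY`/`*_DST_ONLY` modifier is only meaningful if at least one hash type of that layer is also requested. A minimal sketch of the L4 case, with illustrative bit values (the real `RTE_ETH_RSS_*` bits are in `rte_ethdev.h`):

```c
#include <stdint.h>

/* Illustrative flag values only; not the real rte_ethdev.h constants. */
#define RTE_ETH_RSS_PORT         (UINT64_C(1) << 18)
#define RTE_ETH_RSS_L4_SRC_ONLY  (UINT64_C(1) << 60)
#define RTE_ETH_RSS_L4_DST_ONLY  (UINT64_C(1) << 59)

/* Mirrors the hunk above: reject L4 SRC/DST_ONLY modifiers that
 * arrive without any accompanying L4 hash type. */
static inline int l4_modifiers_valid(uint64_t rss_types)
{
	uint64_t type = rss_types & RTE_ETH_RSS_PORT;
	uint64_t mask = rss_types &
		(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);

	return !(!type && mask);
}
```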
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
event.event_data.link_event.link_status =
dev->data->dev_link.link_status;
- /* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+ /* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
switch (dev->data->dev_link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
for (i = 0; i < tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
if (k) {
for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = reg_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
}
/* check simple tx conflict */
if (ad->tx_simple_allowed) {
- if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+ if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
PMD_DRV_LOG(ERR, "No-simple tx is required.");
return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
ad->tx_vec_allowed = (ad->tx_simple_allowed &&
txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
bool rx_deferred_start; /**< don't start this queue in dev start */
uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
uint8_t dcb_tc; /**< Traffic class of rx queue */
- uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
bool q_set; /**< indicate if tx queue has been configured */
bool tx_deferred_start; /**< don't start this queue in dev start */
uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < n; i++) {
free[i] = txep[i].mbuf;
txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct i40e_rx_queue *rxq;
uint16_t desc, i;
bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
/* no QinQ support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
return -EINVAL;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
return i40e_vsi_config_vlan_filter(vsi, TRUE);
else
return i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
return i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
VIRTCHNL_VF_OFFLOAD_RX_POLLING)
#define IAVF_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
static const uint64_t map_hena_rss[] = {
/* IPv4 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
- ETH_RSS_NONFRAG_IPV4_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
- ETH_RSS_NONFRAG_IPV4_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
/* IPv6 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
- ETH_RSS_NONFRAG_IPV6_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
- ETH_RSS_NONFRAG_IPV6_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
/* L2 Payload */
- [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+ [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
};
- const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV4_OTHER |
- ETH_RSS_FRAG_IPV4;
+ const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_FRAG_IPV4;
- const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP |
- ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_FRAG_IPV6;
+ const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_FRAG_IPV6;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
/**
- * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+ * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
* generalizations of all other IPv4 and IPv6 RSS types.
*/
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
rss_hf |= ipv4_rss;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
rss_hf |= ipv6_rss;
RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
if (valid_rss_hf & ipv4_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
if (valid_rss_hf & ipv6_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
if (rss_hf & ~valid_rss_hf)
PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
return 0;
enable = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
iavf_config_vlan_insert_v2(adapter, enable);
return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
int err;
err = iavf_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to update vlan offload");
return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
ad->rx_vec_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Large VF setting */
if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
*/
switch (vf->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
bool enable;
int err;
- if (mask & ETH_VLAN_FILTER_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
iavf_iterate_vlan_filters_v2(dev, enable);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = iavf_config_vlan_strip_v2(adapter, enable);
/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
err = iavf_enable_vlan_strip(adapter);
else
err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(lut, vf->rss_lut, reta_size);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = vf->rss_lut[i];
}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
ret = iavf_query_stats(adapter, &pstats);
if (ret == 0) {
uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
RTE_ETHER_CRC_LEN;
iavf_update_stats(vsi, pstats);
stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 01724cd569dd..55d8a11da388 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -395,90 +395,90 @@ struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* VLAN IPV4 */
#define IAVF_RSS_TYPE_VLAN_IPV4 (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4 ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4 RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6 ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6 RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* GTPU IPv4 */
#define IAVF_RSS_TYPE_GTPU_IPV4 (IAVF_RSS_TYPE_INNER_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_UDP (IAVF_RSS_TYPE_INNER_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_TCP (IAVF_RSS_TYPE_INNER_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define IAVF_RSS_TYPE_GTPU_IPV6 (IAVF_RSS_TYPE_INNER_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_UDP (IAVF_RSS_TYPE_INNER_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_TCP (IAVF_RSS_TYPE_INNER_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/**
* Supported pattern for hash.
@@ -496,7 +496,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv4_udp, IAVF_RSS_TYPE_VLAN_IPV4_UDP, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_vlan_ipv4_tcp, IAVF_RSS_TYPE_VLAN_IPV4_TCP, &outer_ipv4_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv4_sctp, IAVF_RSS_TYPE_VLAN_IPV4_SCTP, &outer_ipv4_sctp_tmplt},
- {iavf_pattern_eth_ipv4_gtpu, ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
+ {iavf_pattern_eth_ipv4_gtpu, RTE_ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4, IAVF_RSS_TYPE_GTPU_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_udp, IAVF_RSS_TYPE_GTPU_IPV4_UDP, &inner_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp, IAVF_RSS_TYPE_GTPU_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -538,9 +538,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ah, IAVF_RSS_TYPE_IPV4_AH, &ipv4_ah_tmplt},
{iavf_pattern_eth_ipv4_l2tpv3, IAVF_RSS_TYPE_IPV4_L2TPV3, &ipv4_l2tpv3_tmplt},
{iavf_pattern_eth_ipv4_pfcp, IAVF_RSS_TYPE_IPV4_PFCP, &ipv4_pfcp_tmplt},
- {iavf_pattern_eth_ipv4_gtpc, ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
- {iavf_pattern_eth_ecpri, ETH_RSS_ECPRI, &eth_ecpri_tmplt},
- {iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_gtpc, RTE_ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ecpri, RTE_ETH_RSS_ECPRI, &eth_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_ecpri, RTE_ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4_tcp, IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -565,7 +565,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
- {iavf_pattern_eth_ipv6_gtpu, ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
+ {iavf_pattern_eth_ipv6_gtpu, RTE_ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6, IAVF_RSS_TYPE_GTPU_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_udp, IAVF_RSS_TYPE_GTPU_IPV6_UDP, &inner_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp, IAVF_RSS_TYPE_GTPU_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -607,7 +607,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv6_ah, IAVF_RSS_TYPE_IPV6_AH, &ipv6_ah_tmplt},
{iavf_pattern_eth_ipv6_l2tpv3, IAVF_RSS_TYPE_IPV6_L2TPV3, &ipv6_l2tpv3_tmplt},
{iavf_pattern_eth_ipv6_pfcp, IAVF_RSS_TYPE_IPV6_PFCP, &ipv6_pfcp_tmplt},
- {iavf_pattern_eth_ipv6_gtpc, ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ipv6_gtpc, RTE_ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6_tcp, IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -648,52 +648,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
struct virtchnl_rss_cfg rss_cfg;
#define IAVF_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
rss_cfg.proto_hdrs = inner_ipv4_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
rss_cfg.proto_hdrs = inner_ipv6_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
@@ -855,28 +855,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr = &proto_hdrs->proto_hdr[i];
switch (hdr->type) {
case VIRTCHNL_PROTO_HDR_ETH:
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
hdr->field_selector = 0;
- else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
REFINE_PROTO_FLD(DEL, ETH_DST);
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
REFINE_PROTO_FLD(DEL, ETH_SRC);
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
- } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -884,39 +884,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -933,7 +933,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
break;
case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
else
hdr->field_selector = 0;
@@ -941,87 +941,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_TCP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_SCTP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_S_VLAN:
- if (!(rss_type & ETH_RSS_S_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_S_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_C_VLAN:
- if (!(rss_type & ETH_RSS_C_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_C_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_L2TPV3:
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ESP:
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_AH:
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_PFCP:
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ECPRI:
- if (!(rss_type & ETH_RSS_ECPRI))
+ if (!(rss_type & RTE_ETH_RSS_ECPRI))
hdr->field_selector = 0;
break;
default:
@@ -1038,7 +1038,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
struct virtchnl_proto_hdr *hdr;
int i;
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
for (i = 0; i < proto_hdrs->count; i++) {
@@ -1163,10 +1163,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -1177,27 +1177,27 @@ struct rss_attr_type {
uint64_t type;
};
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE64)
#define INVALID_RSS_ATTR (RTE_ETH_RSS_L3_PRE32 | \
@@ -1207,9 +1207,9 @@ struct rss_attr_type {
RTE_ETH_RSS_L3_PRE96)
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE64, VALID_RSS_IPV6},
{INVALID_RSS_ATTR, 0}
@@ -1226,15 +1226,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->vsi = vsi;
rxq->offloads = offloads;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
#define IAVF_VPMD_TX_MAX_FREE_BUF 64
#define IAVF_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define IAVF_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define IAVF_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IAVF_VECTOR_PATH 0
#define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
PMD_DRV_LOG(DEBUG, "RSS is not supported");
return -ENOTSUP;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
/* set all lut items to default queue */
memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ebd8ca57ef5f..1cda2db00e56 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -582,7 +582,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -644,7 +644,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
}
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -660,8 +660,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
return 0;
}
@@ -683,27 +683,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -933,42 +933,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
*/
switch (hw->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = hw->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -987,11 +987,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
udp_tunnel->udp_port);
break;
@@ -1018,8 +1018,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
break;
default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
bool enable = !!(dev_conf->rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (!ice_dcf_vlan_offload_ena(repr))
return -ENOTSUP;
- if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer VLAN in QinQ\n");
return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
err = ice_dcf_vf_repr_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
"Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
int err;
err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
TAILQ_INIT(&vsi->mac_list);
TAILQ_INIT(&vsi->vlan_list);
- /* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+ /* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
- ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+ RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
hw->func_caps.common_cap.rss_table_size;
pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
int ret;
#define ICE_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
cfg.symm = 0;
cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
/* Configure RSS for IPv4 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for IPv6 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_rx_queues) {
ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_set_rx_function(dev);
ice_set_tx_function(dev);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = ice_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->flow_type_rss_offloads = 0;
if (!is_safe_mode) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
dev_info->rx_queue_offload_capa = 0;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->reta_size = pf->hash_lut_size;
dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_align = ICE_ALIGN_RING_DESC,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_25G;
phy_type_low = hw->port_info->phy.phy_type_low;
phy_type_high = hw->port_info->phy.phy_type_high;
if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->nb_rx_queues = dev->data->nb_rx_queues;
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
status = ice_aq_get_link_info(hw->port_info, enable_lse,
&link_status, NULL);
if (status != ICE_SUCCESS) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
goto out;
}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
goto out;
/* Full-duplex operation at all supported speeds */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* Parse the link status */
switch (link_status.link_speed) {
case ICE_AQ_LINK_SPEED_10MB:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case ICE_AQ_LINK_SPEED_100MB:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case ICE_AQ_LINK_SPEED_1000MB:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ICE_AQ_LINK_SPEED_2500MB:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case ICE_AQ_LINK_SPEED_5GB:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case ICE_AQ_LINK_SPEED_10GB:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case ICE_AQ_LINK_SPEED_20GB:
- link.link_speed = ETH_SPEED_NUM_20G;
+ link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case ICE_AQ_LINK_SPEED_25GB:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case ICE_AQ_LINK_SPEED_40GB:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case ICE_AQ_LINK_SPEED_50GB:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case ICE_AQ_LINK_SPEED_100GB:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case ICE_AQ_LINK_SPEED_UNKNOWN:
PMD_DRV_LOG(ERR, "Unknown link speed");
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
default:
PMD_DRV_LOG(ERR, "None link speed");
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
out:
ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ice_vsi_config_vlan_filter(vsi, true);
else
ice_vsi_config_vlan_filter(vsi, false);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ice_vsi_config_vlan_stripping(vsi, true);
else
ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
break;
default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
break;
default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
int ret;
if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_TIMESTAMP)) {
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
return -1;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
ICE_FLAG_VF_MAC_BY_PF)
#define ICE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/**
* The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
#define ICE_IPV4_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
#define ICE_IPV6_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE32 | \
RTE_ETH_RSS_L3_PRE48 | \
RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
};
/* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_UDP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_TCP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_SCTP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4 ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4 RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG (ETH_RSS_ETH | ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_ETH_IPV6_UDP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_TCP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_SCTP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6 ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6 RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define ICE_RSS_TYPE_VLAN_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV4)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_VLAN_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_SCTP (ICE_RSS_TYPE_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define ICE_RSS_TYPE_VLAN_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_FRAG (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_VLAN_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_SCTP (ICE_RSS_TYPE_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* GTPU IPv4 */
#define ICE_RSS_TYPE_GTPU_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define ICE_RSS_TYPE_GTPU_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* PPPOE */
-#define ICE_RSS_TYPE_PPPOE (ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE (RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
/* PPPOE IPv4 */
#define ICE_RSS_TYPE_PPPOE_IPV4 (ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
ICE_RSS_TYPE_PPPOE)
/* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/* MAC */
-#define ICE_RSS_TYPE_ETH ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH RTE_ETH_RSS_ETH
/**
* Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
*hash_flds &= ~ICE_FLOW_HASH_ETH;
- if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
- if (rss_type & ETH_RSS_ETH)
+ if (rss_type & RTE_ETH_RSS_ETH)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
- if (rss_type & ETH_RSS_C_VLAN)
+ if (rss_type & RTE_ETH_RSS_C_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
- else if (rss_type & ETH_RSS_S_VLAN)
+ else if (rss_type & RTE_ETH_RSS_S_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
- if (!(rss_type & ETH_RSS_PPPOE))
+ if (!(rss_type & RTE_ETH_RSS_PPPOE))
*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
}
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
}
if (rss_type & RTE_ETH_RSS_L3_PRE32) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE48) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
}
}
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
};
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE32, VALID_RSS_IPV6},
{RTE_ETH_RSS_L3_PRE48, VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
}
}
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
/* Register mbuf field and flag for Rx timestamp */
err = rte_mbuf_dyn_rx_timestamp_register(
&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
QRXFLXP_CNTXT_RXDID_PRIO_M;
- if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
regval |= QRXFLXP_CNTXT_TS_M;
ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = vsi->base_queue + queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
ice_rxd_to_vlan_tci(mb, &rxdp[j]);
rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
for (i = 0; i < txq->tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
rte_mempool_put(txep->mbuf->pool, txep->mbuf);
txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
}
#define ICE_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define ICE_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define ICE_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define ICE_VECTOR_PATH 0
#define ICE_VECTOR_OFFLOAD_PATH 1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
if (rxq->proto_xtr != PROTO_XTR_NONE)
return -1;
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
return -1;
if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (tx_mq_mode != ETH_MQ_TX_NONE)
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
PMD_INIT_LOG(WARNING,
"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = igc_check_mq_mode(dev);
if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (speed == SPEED_2500) {
uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
} else {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
/* VLAN Offload Settings */
eth_igc_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
hw->mac.autoneg = 1;
} else {
int num_speeds = 0;
- if (*speeds & ETH_LINK_SPEED_FIXED) {
+ if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_DRV_LOG(ERR,
"Force speed mode currently not supported");
igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
hw->phy.autoneg_advertised = 0;
hw->mac.autoneg = 1;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_2_5G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
num_speeds++;
}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
- dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_vmdq_pools = 0;
dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->fc.requested_mode = igc_fc_none;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->fc.requested_mode = igc_fc_rx_pause;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->fc.requested_mode = igc_fc_tx_pause;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->fc.requested_mode = igc_fc_full;
break;
default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* set redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta, reg;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to update the register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* check mask whether need to read the register value first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* read redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to read register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf = 0;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf |= rss_hf;
return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igc_vlan_hw_strip_enable(dev);
else
igc_vlan_hw_strip_disable(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igc_vlan_hw_filter_enable(dev);
else
igc_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return igc_vlan_hw_extend_enable(dev);
else
return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
uint32_t reg_val;
/* only outer TPID of double VLAN can be configured*/
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg_val = IGC_READ_REG(hw, IGC_VET);
reg_val = (reg_val & (~IGC_VET_EXT)) |
((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
#define IGC_TX_MAX_MTU_SEG UINT8_MAX
#define IGC_RX_OFFLOAD_ALL ( \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IGC_TX_OFFLOAD_ALL ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_UDP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_UDP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define IGC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IGC_MAX_ETQF_FILTERS 3 /* etqf(3) is used for 1588 */
#define IGC_ETQF_FILTER_1588 3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
/**< Start context position for transmit queue. */
struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
}
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
}
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igc_rss_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/*
* configure RSS register for following,
* then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0;
bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
rxcsum |= IGC_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= IGC_RXCSUM_IPOFL;
else
rxcsum &= ~IGC_RXCSUM_IPOFL;
if (offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rxcsum |= IGC_RXCSUM_TUOFL;
- offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
} else {
rxcsum &= ~IGC_RXCSUM_TUOFL;
}
- if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
rxcsum |= IGC_RXCSUM_CRCOFL;
else
rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
dvmolr |= IGC_DVMOLR_STRVLAN;
else
dvmolr &= ~IGC_DVMOLR_STRVLAN;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
dvmolr &= ~IGC_DVMOLR_STRCRC;
else
dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
if (on) {
reg_val |= IGC_DVMOLR_STRVLAN;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(link));
if (adapter->idev.port_info->config.an_enable) {
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
}
if (!adapter->link_up ||
!(lif->state & IONIC_LIF_F_UP)) {
/* Interface is down */
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else {
/* Interface is up */
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (adapter->link_speed) {
case 10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
dev_info->speed_capa =
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/*
* Per-queue capabilities
* RTE does not support disabling a feature on a queue if it is
* enabled globally on the device. Thus the driver does not advertise
- * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+ * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
* though the driver would be otherwise capable of disabling it on
* a per-queue basis.
*/
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
*/
dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
0;
dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
0;
dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
fc_conf->autoneg = 0;
if (idev->port_info->config.pause_type)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
break;
- case RTE_FC_RX_PAUSE:
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
return -ENOTSUP;
}
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = tbl_sz / RTE_RETA_GROUP_SIZE;
+ num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if (reta_conf[i].mask & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
}
}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
IONIC_RSS_HASH_KEY_SIZE);
if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
rss_conf->rss_hf = rss_hf;
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (!lif->rss_ind_tbl)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
rss_types |= IONIC_RSS_TYPE_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
rss_types |= IONIC_RSS_TYPE_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
static inline uint32_t
ionic_parse_link_speeds(uint16_t link_speeds)
{
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
return 100000;
- else if (link_speeds & ETH_LINK_SPEED_50G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
return 50000;
- else if (link_speeds & ETH_LINK_SPEED_40G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
return 40000;
- else if (link_speeds & ETH_LINK_SPEED_25G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
return 25000;
- else if (link_speeds & ETH_LINK_SPEED_10G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
return 10000;
else
return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
IONIC_PRINT_CALL();
allowed_speeds =
- ETH_LINK_SPEED_FIXED |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_FIXED |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
if (dev_conf->link_speeds & ~allowed_speeds) {
IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure link */
- an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
ionic_dev_cmd_port_autoneg(idev, an_enable);
err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
#include <rte_ethdev.h>
#define IONIC_ETH_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
/*
* IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
- * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+ * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
*/
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
else
lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
/*
* NB: While it is true that RSS_HASH is always enabled on ionic,
* setting this flag unconditionally causes problems in DTS.
- * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
*/
/* RX per-port */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
lif->features |= IONIC_ETH_HW_RX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_RX_CSUM;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
lif->features |= IONIC_ETH_HW_RX_SG;
lif->eth_dev->data->scattered_rx = 1;
} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
}
/* Covers VLAN_STRIP */
- ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+ ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
/* TX per-port */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
lif->features |= IONIC_ETH_HW_TX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_TX_CSUM;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
else
lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
lif->features |= IONIC_ETH_HW_TX_SG;
else
lif->features &= ~IONIC_ETH_HW_TX_SG;
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
lif->features |= IONIC_ETH_HW_TSO;
lif->features |= IONIC_ETH_HW_TSO_IPV6;
lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
txq->flags |= IONIC_QCQ_F_DEFERRED;
/* Convert the offload flags into queue flags */
- if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_L3;
- if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_TCP;
- if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_UDP;
eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/*
* Note: the interface does not currently support
- * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
* when the adapter will be able to keep the CRC and subtract
* it to the length for all received packets:
* if (eth_dev->data->dev_conf.rxmode.offloads &
- * DEV_RX_OFFLOAD_KEEP_CRC)
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC)
* rxq->crc_len = ETHER_CRC_LEN;
*/
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->speed_capa =
(hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
- ETH_LINK_SPEED_10G :
+ RTE_ETH_LINK_SPEED_10G :
((hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
- ETH_LINK_SPEED_25G :
- ETH_LINK_SPEED_AUTONEG);
+ RTE_ETH_LINK_SPEED_25G :
+ RTE_ETH_LINK_SPEED_AUTONEG);
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
};
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
(uint64_t *)&link_speed);
switch (link_speed) {
case IFPGA_RAWDEV_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case IFPGA_RAWDEV_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= IXGBE_DMATXCTL_GDV;
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (qinq) {
reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
" by single VLAN");
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (qinq) {
/* Only the high 16-bits is valid */
IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
if (hw->mac.type == ixgbe_mac_82598EB) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
ctrl |= IXGBE_VLNCTRL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl |= IXGBE_RXDCTL_VME;
on = TRUE;
} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct ixgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
ixgbe_vlan_hw_strip_config(dev);
- }
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ixgbe_vlan_hw_filter_enable(dev);
else
ixgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
ixgbe_vlan_hw_extend_enable(dev);
else
ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB) {
if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
ixgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- allowed_speeds = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
break;
default:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
}
link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
speed = IXGBE_LINK_SPEED_82599_AUTONEG;
}
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= IXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= IXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= IXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= IXGBE_LINK_SPEED_100_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= IXGBE_LINK_SPEED_10_FULL;
}
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB)
dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
if (hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X540_vf ||
hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550_vf) {
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
}
if (hw->mac.type == ixgbe_mac_X550) {
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
}
/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
u32 esdp_reg;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
if (diag != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case IXGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
case IXGBE_LINK_SPEED_10_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case IXGBE_LINK_SPEED_100_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case IXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case IXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case IXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case IXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
ixgbevf_vlan_strip_queue_set(dev, i, on);
}
}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= IXGBE_VMOLR_AUPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= IXGBE_VMOLR_ROMPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= IXGBE_VMOLR_ROPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= IXGBE_VMOLR_BAM;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= IXGBE_VMOLR_MPE;
return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = IXGBE_INCVAL_100;
shift = IXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = IXGBE_INCVAL_1GB;
shift = IXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = IXGBE_INCVAL_10GB;
shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- return ETH_RSS_RETA_SIZE_512;
+ return RTE_ETH_RSS_RETA_SIZE_512;
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
- return ETH_RSS_RETA_SIZE_64;
+ return RTE_ETH_RSS_RETA_SIZE_64;
case ixgbe_mac_X540_vf:
case ixgbe_mac_82599_vf:
return 0;
default:
- return ETH_RSS_RETA_SIZE_128;
+ return RTE_ETH_RSS_RETA_SIZE_128;
}
}
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- if (reta_idx < ETH_RSS_RETA_SIZE_128)
+ if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
return IXGBE_RETA(reta_idx >> 2);
else
- return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+ return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled*/
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled*/
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
#define IXGBE_FDIR_NVGRE_TUNNEL_TYPE 0x0
#define IXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IXGBE_VF_IRQ_ENABLE_MASK 3 /* vf irq enable mask */
#define IXGBE_VF_MAXMSIVECTOR 1
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
uint32_t key);
static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input, uint8_t queue,
uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
{
*fdirctrl = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
IXGBE_SECTXCTRL_STORE_FORWARD);
reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
gpie |= IXGBE_GPIE_VTMODE_16;
break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a51450fe5b82..aa3a406c204d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hw->mac.type != ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return offloads;
}
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
uint64_t offloads;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (ixgbe_is_vf(dev) == 0)
- offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
/*
* RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X550) &&
!RTE_ETH_DEV_SRIOV(dev).active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
}
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
ixgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
}
/* MRQC: enable vmdq and dcb */
- mrqc = (num_pools == ETH_16_POOLS) ?
+ mrqc = (num_pools == RTE_ETH_16_POOLS) ?
IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 16 or 32 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*
* MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
if (hw->mac.type != ixgbe_mac_82598EB)
/*PF VF Transmit Enable*/
IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
- vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
if (hw->mac.type != ixgbe_mac_82598EB) {
config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_configure(dev);
}
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
- if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- }
}
if (config_dcb_tx) {
/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = ixgbe_dcb_pfc_enabled;
}
ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 64 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
break;
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQEN);
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT4TCEN);
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT8TCEN);
break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
ixgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
ixgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
ixgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
ixgbe_vmdq_tx_hw_configure(hw);
else {
mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
IXGBE_MTQC_8TC_8TQ;
break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
rxq->rx_using_sse = rx_using_sse;
#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
#endif
}
}
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
/*
* According to chapter of 4.6.7.2.1 of the Spec Rev.
* 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RFCTL configuration */
rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
- if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~IXGBE_RFCTL_RSC_DIS;
else
rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
else
hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
rxcsum |= IXGBE_RXCSUM_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= IXGBE_RXCSUM_IPPCSE;
else
rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540) {
rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
else
rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY) ||
+ RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY)) {
+ RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = ixgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -5681,7 +5680,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5730,7 +5729,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
@@ -5738,8 +5737,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
uint8_t rx_udp_csum_zero_err;
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
/* no fdir support */
if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
/**< Maximum number of MAC addresses. */
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/**< Device RX offload capabilities. */
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**< Device TX offload capabilities. */
dev_info->speed_capa =
representor->pf_ethdev->data->dev_link.link_speed;
- /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
dev_info->switch_info.name =
representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
*/
if (hw->mac.type == ixgbe_mac_82598EB)
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_16_POOLS;
+ RTE_ETH_16_POOLS;
else
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_64_POOLS;
+ RTE_ETH_64_POOLS;
for (q = 0; q < queues_per_pool; q++)
(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
* @param rx_mask
* The RX mode mask, which is one or more of accepting Untagged Packets,
* packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-* ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-* ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+* RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+* RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
* in rx_mode.
* @param on
* 1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static int is_kni_initialized;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
break;
/* CN23xx 25G cards */
case PCI_SUBSYS_DEV_ID_CN2350_225:
case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = ETH_LINK_SPEED_25G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
break;
default:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
lio_dev_err(lio_dev,
"Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->max_mac_addrs = 1;
- devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+ devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
+ devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
devinfo->rx_desc_lim = lio_rx_desc_lim;
devinfo->tx_desc_lim = lio_tx_desc_lim;
devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_EX |
- ETH_RSS_IPV6_TCP_EX);
+ devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_IPV6_TCP_EX);
return 0;
}
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
- for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
rss_state->itable[index] = reta_conf[i].reta[j];
}
}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
if (rss_state->ip)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (rss_state->tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (rss_state->ipv6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (rss_state->ipv6_tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (rss_state->ipv6_ex)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
rss_conf->rss_hf = rss_hf;
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_state->hash_disable)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
hashinfo |= LIO_RSS_HASH_IPV4;
rss_state->ip = 1;
} else {
rss_state->ip = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV4;
rss_state->tcp_hash = 1;
} else {
rss_state->tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
hashinfo |= LIO_RSS_HASH_IPV6;
rss_state->ipv6 = 1;
} else {
rss_state->ipv6 = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6;
rss_state->ipv6_tcp_hash = 1;
} else {
rss_state->ipv6_tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
hashinfo |= LIO_RSS_HASH_IPV6_EX;
rss_state->ipv6_ex = 1;
} else {
rss_state->ipv6_ex = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
rss_state->ipv6_tcp_ex_hash = 1;
} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
/* Initialize */
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
/* Return what we found */
if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
return rte_eth_linkstatus_set(eth_dev, &link);
}
- link.link_status = ETH_LINK_UP; /* Interface is up */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (lio_dev->linfo.link.s.speed) {
case LIO_LINK_SPEED_10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case LIO_LINK_SPEED_25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
}
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[conf_idx].reta[reta_idx] = q_idx;
reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rss_conf rss_conf;
switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
lio_dev_rss_configure(eth_dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode. */
default:
memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
lio_dev_err(lio_dev, "Unable to set Link Down\n");
return -1;
}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Inform firmware about change in number of queues to use.
* Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
int i;
int ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
#define MEMIF_MP_SEND_REGION "memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
}
MIF_LOG(INFO, "Connected.");
return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
proc_private = dev->process_private;
- if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
proc_private->regions_num == 0) {
memif_mp_request_regions(dev);
- } else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+ } else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
proc_private->regions_num > 0) {
memif_free_regions(dev);
}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = priv->if_index;
info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_56G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_56G;
info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
dev_link.link_speed = link_speed;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
dev->data->dev_link = dev_link;
return 0;
}
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
ret = 0;
out:
MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
};
static const uint64_t dpdk[] = {
[INNER] = 0,
- [IPV4] = ETH_RSS_IPV4,
- [IPV4_1] = ETH_RSS_FRAG_IPV4,
- [IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
- [IPV6] = ETH_RSS_IPV6,
- [IPV6_1] = ETH_RSS_FRAG_IPV6,
- [IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
- [IPV6_3] = ETH_RSS_IPV6_EX,
+ [IPV4] = RTE_ETH_RSS_IPV4,
+ [IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+ [IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IPV6] = RTE_ETH_RSS_IPV6,
+ [IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+ [IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IPV6_3] = RTE_ETH_RSS_IPV6_EX,
[TCP] = 0,
[UDP] = 0,
- [IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
- [IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
- [IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
- [IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
- [IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
- [IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+ [IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ [IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ [IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+ [IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+ [IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ [IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
};
static const uint64_t verbs[RTE_DIM(dpdk)] = {
[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
* - MAC flow rules are generated from @p dev->data->mac_addrs
* (@p priv->mac array).
* - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
* is enabled and VLAN filters are configured.
*
* @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
struct rte_ether_addr *rule_mac = &eth_spec.dst;
rte_be16_t *rule_vlan =
(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!ETH_DEV(priv)->data->promiscuous ?
&vlan_spec.tci :
NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
static void
mlx4_link_status_alarm(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
};
uint32_t caught[RTE_DIM(type)] = { 0 };
struct ibv_async_event event;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
unsigned int i;
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
int
mlx4_intr_install(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
int rc;
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
int
mlx4_rxq_intr_enable(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
uint64_t
mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_RSS_HASH;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
- offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return offloads;
}
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
uint64_t
mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
(void)priv;
return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
/* By default, FCS (CRC) is stripped by hardware. */
crc_present = 0;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (priv->hw_fcs_strip) {
crc_present = 1;
} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts = elts,
/* Toggle Rx checksum offload if hardware supports it. */
.csum = priv->hw_csum &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.csum_l2tun = priv->hw_csum_l2tun &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.crc_present = crc_present,
.l2tun_offload = priv->hw_csum_l2tun,
.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
uint64_t
mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+ uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (priv->hw_csum) {
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
}
if (priv->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (priv->hw_csum_l2tun) {
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (priv->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
}
return offloads;
}
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts_comp_cd_init =
RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
.csum = priv->hw_csum &&
- (offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM)),
+ (offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
.csum_l2tun = priv->hw_csum_l2tun &&
(offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
/* Enable Tx loopback for VF devices. */
.lb = !!priv->vf,
.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
dev_link.link_speed = link_speed;
priv->link_speed_capa = 0;
if (edata.supported & (SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (edata.supported & SUPPORTED_10000baseKR_Full)
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (edata.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
return ret;
}
dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
- ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+ RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
sc = ecmd->link_mode_masks[0] |
((uint64_t)ecmd->link_mode_masks[1] << 32);
priv->link_speed_capa = 0;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
sc = ecmd->link_mode_masks[2] |
((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
MLX5_BITSHIFT
(ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 111a7597317a..23d9e0a476ac 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1310,8 +1310,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1594,7 +1594,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/*
* If HW has bug working with tunnel packet decapsulation and
* scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
- * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+ * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
*/
if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7263d354b180..3a9b716e438c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1704,10 +1704,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_udp_tunnel *udp_tunnel)
{
MLX5_ASSERT(udp_tunnel != NULL);
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
udp_tunnel->udp_port == 4789)
return 0;
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
udp_tunnel->udp_port == 4790)
return 0;
return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 42cacd0bbe3b..52f03ada2ced 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1233,7 +1233,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
struct mlx5_flow_rss_desc {
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint64_t hash_fields; /* Verbs Hash fields. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
#define MLX5_VPMD_DESCS_PER_LOOP 4
/* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
/* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
MLX5_RSS_SRC_DST_ONLY))
/* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
}
if ((dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+ RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->default_txportconf.ring_size = 256;
info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
- if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
- (priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+ if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+ (priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
info->default_rxportconf.nb_queues = 16;
info->default_txportconf.nb_queues = 16;
if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 002449e993e7..d645fd48647e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
uint64_t rss_types;
/**<
* RSS types bit-field associated with this node
- * (see ETH_RSS_* definitions).
+ * (see RTE_ETH_RSS_* definitions).
*/
uint64_t node_flags;
/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
* @param[in] pattern
* User flow pattern.
* @param[in] types
- * RSS types to expand (see ETH_RSS_* definitions).
+ * RSS types to expand (see RTE_ETH_RSS_* definitions).
* @param[in] graph
* Input graph to expand @p pattern according to @p types.
* @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_OUTER_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_GRE,
MLX5_EXPANSION_NVGRE),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_VXLAN] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
MLX5_EXPANSION_IPV4_TCP),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_IPV4_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
MLX5_EXPANSION_IPV6_TCP,
MLX5_EXPANSION_IPV6_FRAG_EXT),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_IPV6_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1100,7 +1100,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
* @param[in] tunnel
* 1 when the hash field is for a tunnel item.
* @param[in] layer_types
- * ETH_RSS_* types.
+ * RTE_ETH_RSS_* types.
* @param[in] hash_fields
* Item hash fields.
*
@@ -1653,14 +1653,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
&rss->types,
"some RSS protocols are not"
" supported");
- if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
- !(rss->types & ETH_RSS_IP))
+ if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+ !(rss->types & RTE_ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L3 partial RSS requested but L3 RSS"
" type not specified");
- if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
- !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+ if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+ !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L4 partial RSS requested but L4 RSS"
@@ -6427,8 +6427,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
* mlx5_flow_hashfields_adjust() in advance.
*/
rss_desc->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
}
flow->dev_handles = 0;
if (rss && rss->types) {
@@ -7126,7 +7126,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
if (!priv->reta_idx_n || !priv->rxqs_n) {
return 0;
}
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
queue[i] = (*priv->reta_idx)[i];
@@ -8794,7 +8794,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
NULL, "invalid port configuration");
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
ctx->action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f1a83d537d0c..4a16f30fb7a6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -331,18 +331,18 @@ enum mlx5_feature_name {
/* Valid layer type for IPV4 RSS. */
#define MLX5_IPV4_LAYER_TYPES \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
/* IBV hash source bits for IPV4. */
#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
/* Valid layer type for IPV6 RSS. */
#define MLX5_IPV6_LAYER_TYPES \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
/* IBV hash source bits for IPV6. */
#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5bd90bfa2818..c4a5706532a9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10862,9 +10862,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
else
dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10872,9 +10872,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
else
dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10888,11 +10888,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
return;
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
- if (rss_types & ETH_RSS_UDP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_UDP;
else
@@ -10900,11 +10900,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
}
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
- if (rss_types & ETH_RSS_TCP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_TCP;
else
@@ -14444,9 +14444,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4:
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV4;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV4;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV4;
else
*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14455,9 +14455,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV6:
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV6;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV6;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV6;
else
*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14466,11 +14466,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_UDP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_UDP:
- if (rss_types & ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_UDP) {
*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
else
*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14479,11 +14479,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_TCP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_TCP:
- if (rss_types & ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_TCP) {
*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
else
*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14631,8 +14631,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
origin = &shared_rss->origin;
origin->func = rss->func;
origin->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 892abcb65779..f9010a674d7f 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1824,7 +1824,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_TCP,
+ (rss_desc, tunnel, RTE_ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
IBV_RX_HASH_DST_PORT_TCP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1837,7 +1837,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_UDP,
+ (rss_desc, tunnel, RTE_ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
IBV_RX_HASH_DST_PORT_UDP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
if (!(*priv->rxqs)[i])
continue;
(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
- !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+ !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
++idx;
}
return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
}
/* Fill each entry of the table even if its bit is not set. */
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
(*priv->reta_idx)[i];
}
return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
return ret;
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
if (((reta_conf[idx].mask >> i) & 0x1) == 0)
continue;
MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 60673d014d02..14b9991c5fa8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *config = &priv->config;
- uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
if (config->hw_fcs_strip)
- offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (config->hw_csum)
- offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
uint64_t
mlx5_get_rx_port_offloads(void)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
return offloads;
}
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->dev_conf.rxmode.offloads;
/* The offloads should be checked on rte_eth_dev layer. */
- MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
DRV_LOG(ERR, "port %u queue index %u split "
"offload not configured",
@@ -1336,7 +1336,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
- unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+ unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1439,7 +1439,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
MLX5_ASSERT(tmpl->rxq.rxseg_n &&
tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
- if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
@@ -1485,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
config->mprq.stride_size_n : mprq_stride_size;
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_scatter_en =
- !!(offloads & DEV_RX_OFFLOAD_SCATTER);
+ !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1500,7 +1500,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pktlen;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
if (lro_on_queue && first_mb_free_size <
@@ -1561,9 +1561,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
- tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+ tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* Configure Rx timestamp. */
- tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+ tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
tmpl->rxq.timestamp_rx_flag = 0;
if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
&tmpl->rxq.timestamp_offset,
@@ -1572,11 +1572,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
goto error;
}
/* Configure VLAN stripping. */
- tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
tmpl->rxq.lro = lro_on_queue;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
@@ -1606,7 +1606,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.crc_present << 2);
/* Save port ID. */
tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
- (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+ (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
tmpl->rxq.port_id = dev->data->port_id;
tmpl->priv = priv;
tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
/* HW checksum offload capabilities of vectorized Tx. */
#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
/*
* Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
unsigned int diff = 0, olx = 0, i, m;
MLX5_ASSERT(priv);
- if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
/* We should support Multi-Segment Packets. */
olx |= MLX5_TXOFF_CONFIG_MULTI;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
/* We should support TCP Send Offload. */
olx |= MLX5_TXOFF_CONFIG_TSO;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support Software Parser for Tunnels. */
olx |= MLX5_TXOFF_CONFIG_SWP;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support IP/TCP/UDP Checksums. */
olx |= MLX5_TXOFF_CONFIG_CSUM;
}
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
/* We should support VLAN insertion. */
olx |= MLX5_TXOFF_CONFIG_VLAN;
}
- if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
rte_mbuf_dynflag_lookup
(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
struct mlx5_dev_config *config = &priv->config;
if (config->hw_csum)
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if (config->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (config->tx_pp)
- offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+ offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
if (config->swp) {
if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->swp & MLX5_SW_PARSING_TSO_CAP)
- offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
}
if (config->tunnel_en) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso) {
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
- offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GRE_CAP)
- offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
- offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
}
}
if (!config->mprq.enabled)
- offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
return offloads;
}
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
unsigned int inlen_mode; /* Minimal required Inline data. */
unsigned int txqs_inline; /* Min Tx queues to enable inline. */
uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
- bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
bool vlan_inline;
unsigned int temp;
txq_ctrl->txq.fast_free =
- !!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
- !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+ !!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
!config->mprq.enabled);
if (config->txqs_inline == MLX5_ARG_UNSET)
txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
* tx_burst routine.
*/
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
- vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+ vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
/*
* If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
MLX5_MAX_TSO_HEADER);
txq_ctrl->txq.tso_en = 1;
}
- if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+ if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
- ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
- ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
(config->swp & MLX5_SW_PARSING_TSO_CAP))
txq_ctrl->txq.tunnel_en = 1;
- txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+ txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_TSO_CAP)) |
- ((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+ ((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_CSUM_CAP));
}
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (!priv->config.hw_vlan_strip) {
DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 31c4d3276053..9a9069da7572 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
struct mvneta_priv *priv = dev->data->dev_private;
struct neta_ppio_params *ppio_params;
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *info)
{
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G;
info->max_rx_queues = MRVL_NETA_RXQ_MAX;
info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
neta_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 126a9a0c11b9..ccb87d518d83 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
#define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
/** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
if (rss_conf->rss_hf == 0) {
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
- } else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_2_TUPLE;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 1;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
- dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+ dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return ret;
if (dev->data->nb_rx_queues == 1 &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
ret = pp2_ppio_disable(priv->ppio);
if (ret)
return ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
if (dev->data->all_multicast == 1)
mrvl_allmulticast_enable(dev);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = mrvl_populate_vlan_table(dev, 1);
if (ret) {
MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
priv->flow_ctrl = 0;
}
- if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
ret = mrvl_dev_set_link_up(dev);
if (ret) {
MRVL_LOG(ERR, "Failed to set link up");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case SPEED_10000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
pp2_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
info->max_rx_queues = MRVL_PP2_RXQ_MAX;
info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
info->tx_offload_capa = MRVL_TX_OFFLOADS;
info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
- info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP;
+ info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP;
/* By default packets are dropped if no descriptors are available */
info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
int ret;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
MRVL_LOG(ERR, "VLAN stripping is not supported\n");
return -ENOTSUP;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = mrvl_populate_vlan_table(dev, 1);
else
ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return ret;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
MRVL_LOG(ERR, "Extend VLAN not supported\n");
return -ENOTSUP;
}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
- rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+ rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return ret;
}
- fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+ fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
if (en) {
- if (fc_conf->mode == RTE_FC_NONE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
}
return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
rx_en = 1;
tx_en = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
rx_en = 0;
tx_en = 1;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
rx_en = 1;
tx_en = 0;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
rx_en = 0;
tx_en = 0;
break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hash_type == PP2_PPIO_HASH_T_NONE)
rss_conf->rss_hf = 0;
else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
- rss_conf->rss_hf = ETH_RSS_IPV4;
+ rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
return 0;
}
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
eth_dev->dev_ops = &mrvl_ops;
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_eth_dev_probing_finish(eth_dev);
return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
#include "hn_nvs.h"
#include "ndis.h"
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NETVSC_ARG_LATENCY "latency"
#define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
hn_rndis_get_linkspeed(hv);
link = (struct rte_eth_link) {
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_autoneg = ETH_LINK_SPEED_FIXED,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
.link_speed = hv->link_speed / 10000,
};
if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
if (old.link_status == link.link_status)
return 0;
PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
- (link.link_status == ETH_LINK_UP) ? "up" : "down");
+ (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
return rte_eth_linkstatus_set(dev, &link);
}
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
struct hn_data *hv = dev->data->dev_private;
int rc;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = HN_MAX_XFER_LEN;
dev_info->max_mac_addrs = 1;
dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
dev_info->flow_type_rss_offloads = hv->rss_offloads;
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->max_rx_queues = hv->max_queues;
dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
/* Convert from DPDK RSS hash flags to NDIS hash flags */
hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
hv->rss_hash |= NDIS_HASH_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
hv->rss_hash |= NDIS_HASH_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
hv->rss_hash |= NDIS_HASH_IPV6_EX;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
if (hv->rss_hash & NDIS_HASH_IPV4)
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (hv->rss_hash & NDIS_HASH_IPV6)
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
if (hv->rss_hash & NDIS_HASH_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
return 0;
}
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = hn_rndis_conf_offload(hv, txmode->offloads,
rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
hv->rss_offloads = 0;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
- hv->rss_offloads |= ETH_RSS_IPV4
- | ETH_RSS_NONFRAG_IPV4_TCP
- | ETH_RSS_NONFRAG_IPV4_UDP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV4
+ | RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
- hv->rss_offloads |= ETH_RSS_IPV6
- | ETH_RSS_NONFRAG_IPV6_TCP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6
+ | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
- hv->rss_offloads |= ETH_RSS_IPV6_EX
- | ETH_RSS_IPV6_TCP_EX;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+ | RTE_ETH_RSS_IPV6_TCP_EX;
/* Commit! */
*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
== NDIS_RXCSUM_CAP_TCP4)
params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
== NDIS_TXCSUM_CAP_IP4)
params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
else
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
else
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
return error;
}
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
== HN_NDIS_TXCSUM_CAP_IP4)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
== HN_NDIS_TXCSUM_CAP_TCP4 &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
== HN_NDIS_TXCSUM_CAP_TCP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
(hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
== HN_NDIS_LSOV2_CAP_IP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
return 0;
}
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = dev->data->nb_rx_queues;
dev_info->max_tx_queues = dev->data->nb_tx_queues;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
status.speed = MAC_SPEED_UNKNOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
if (internals->rxmac[0] != NULL) {
nc_rxmac_read_status(internals->rxmac[0], &status);
switch (status.speed) {
case MAC_SPEED_10G:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case MAC_SPEED_40G:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case MAC_SPEED_100G:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
nc_rxmac_read_status(internals->rxmac[i], &status);
if (status.enabled && status.link_up) {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
break;
}
}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
}
/* Timestamps are enabled when there is
* key-value pair: enable_timestamp=1
- * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+ * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
*/
if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Checking TX mode */
if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
}
/* Checking RX mode */
- if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
!(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
PMD_INIT_LOG(INFO, "RSS not supported");
return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
hw->mtu = dev->data->mtu;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_L2MC;
/* TX checksum offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
/* LSO offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hw->cap & NFP_NET_CFG_CTRL_LSO)
ctrl |= NFP_NET_CFG_CTRL_LSO;
else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
/* RX gather */
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
ctrl |= NFP_NET_CFG_CTRL_GATHER;
return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
int ret;
static const uint32_t ls_to_ethtool[] = {
- [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_1G] = ETH_SPEED_NUM_1G,
- [NFP_NET_CFG_STS_LINK_RATE_10G] = ETH_SPEED_NUM_10G,
- [NFP_NET_CFG_STS_LINK_RATE_25G] = ETH_SPEED_NUM_25G,
- [NFP_NET_CFG_STS_LINK_RATE_40G] = ETH_SPEED_NUM_40G,
- [NFP_NET_CFG_STS_LINK_RATE_50G] = ETH_SPEED_NUM_50G,
- [NFP_NET_CFG_STS_LINK_RATE_100G] = ETH_SPEED_NUM_100G,
+ [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_1G] = RTE_ETH_SPEED_NUM_1G,
+ [NFP_NET_CFG_STS_LINK_RATE_10G] = RTE_ETH_SPEED_NUM_10G,
+ [NFP_NET_CFG_STS_LINK_RATE_25G] = RTE_ETH_SPEED_NUM_25G,
+ [NFP_NET_CFG_STS_LINK_RATE_40G] = RTE_ETH_SPEED_NUM_40G,
+ [NFP_NET_CFG_STS_LINK_RATE_50G] = RTE_ETH_SPEED_NUM_50G,
+ [NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
memset(&link, 0, sizeof(struct rte_eth_link));
if (nn_link_status & NFP_NET_CFG_STS_LINK)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
NFP_NET_CFG_STS_LINK_RATE_MASK;
if (nn_link_status >= RTE_DIM(ls_to_ethtool))
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
link.link_speed = ls_to_ethtool[nn_link_status];
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = 1;
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
};
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_UDP;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP;
dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
}
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
if (link.link_status)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
new_ctrl = 0;
/* Enable vlan strip if it is not configured yet */
- if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
!(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
/* Disable vlan strip just if it is configured */
- if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
rss_hf = rss_conf->rss_hf;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
/* Propagate current RSS hash functions to caller */
rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = link_up;
link_speeds = &dev->data->dev_conf.link_speeds;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
negotiate = true;
err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
allowed_speeds = 0;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
- allowed_speeds |= ETH_LINK_SPEED_1G;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_100M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_10M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
if (*link_speeds & ~allowed_speeds) {
PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = hw->mac.default_speeds;
} else {
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= NGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= NGBE_LINK_SPEED_100M_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= NGBE_LINK_SPEED_10M_FULL;
}
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_10M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ~ETH_LINK_SPEED_AUTONEG);
+ ~RTE_ETH_LINK_SPEED_AUTONEG);
hw->mac.get_link_status = true;
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case NGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
case NGBE_LINK_SPEED_10M_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
lan_speed = 0;
break;
case NGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
lan_speed = 1;
break;
case NGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
lan_speed = 2;
break;
}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
- if (link.link_status == ETH_LINK_UP) {
+ if (link.link_status == RTE_ETH_LINK_UP) {
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
ngbe_dev_link_update(dev, 0);
/* likely to up */
- if (link.link_status != ETH_LINK_UP)
+ if (link.link_status != RTE_ETH_LINK_UP)
/* handle it 1 sec later, wait it being stable */
timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
rte_spinlock_t rss_lock;
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[40]; /**< 40-byte hash key. */
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
if (dev == NULL)
return -EINVAL;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
if (dev == NULL)
return 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
internal->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->port_id = eth_dev->data->port_id;
rte_eth_random_addr(internals->eth_addr.addr_bytes);
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
- internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
+ internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
rte_memcpy(internals->rss_key, default_rss_key, 40);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
(eth_dev->data->port_id),
link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
switch (nic->speed) {
case OCTEONTX_LINK_SPEED_SGMII:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case OCTEONTX_LINK_SPEED_XAUI:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_RXAUI:
case OCTEONTX_LINK_SPEED_10G_R:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_QSGMII:
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case OCTEONTX_LINK_SPEED_40G_R:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case OCTEONTX_LINK_SPEED_RESERVE1:
case OCTEONTX_LINK_SPEED_RESERVE2:
default:
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
octeontx_log_err("incorrect link speed %d", nic->speed);
break;
}
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= OCCTX_TX_MULTI_SEG_F;
return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
flags |= OCCTX_RX_MULTI_SEG_F;
eth_dev->data->scattered_rx = 1;
/* If scatter mode is enabled, TX should also be in multi
* seg mode, else memory leak will occur
*/
- nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
- if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
- txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+ txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
octeontx_log_err("setting link speed/duplex not supported");
return -EINVAL;
}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
octeontx_log_err("Scatter mode is disabled");
return -EINVAL;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
/* Setup scatter mode if needed by jumbo */
if (data->mtu > buffsz) {
- nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
struct octeontx_nic *nic = octeontx_pmd_priv(dev);
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_40G;
/* Min/Max MTU supported */
dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
nic->ev_ports = 1;
nic->print_flag = -1;
- data->dev_link.link_status = ETH_LINK_DOWN;
+ data->dev_link.link_status = RTE_ETH_LINK_DOWN;
data->dev_started = 0;
data->promiscuous = 0;
data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,23 @@
#define OCCTX_MAX_MTU (OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
#define OCTEONTX_RX_OFFLOADS ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
static inline struct octeontx_nic *
octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = octeontx_vlan_hw_filter(nic, true);
if (rc)
goto done;
- nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
} else {
rc = octeontx_vlan_hw_filter(nic, false);
if (rc)
goto done;
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
}
}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
TAILQ_INIT(&nic->vlan_info.fltr_tbl);
- rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
if (rc)
octeontx_log_err("Failed to set vlan offload rc=%d", rc);
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
return rc;
if (conf.rx_pause && conf.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (conf.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (conf.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
/* low_water & high_water values are in Bytes */
fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
conf.high_water = fc_conf->high_water;
conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index f491e20e95c1..060d267f5de5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
if (otx2_dev_is_vf(dev) ||
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
/* TSO not supported for earlier chip revisions */
if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
return capa;
}
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
req->npa_func = otx2_npa_pf_func_get();
req->sso_func = otx2_sso_pf_func_get();
req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->rq.sso_ena = 0;
- if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
aq->rq.ipsech_ena = 1;
aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev)) {
rc = otx2_nix_raw_clock_tsc_conv(dev);
if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
- conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Setting up the rx[tx]_offload_flags due to change
* in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
otx2_err("Outer IP and SCTP checksum unsupported");
goto fail_configure;
}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled in PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_enable(eth_dev);
else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
rc = otx2_eth_sec_ctx_create(eth_dev);
if (rc)
goto free_mac_addrs;
- dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
/* Initialize rte-flow */
rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
#define CQ_TIMER_THRESH_MAX 255
-#define NIX_RSS_L3_L4_SRC_DST (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
- | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+ | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
- ETH_RSS_TCP | ETH_RSS_SCTP | \
- ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
- ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+ NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_C_VLAN)
#define NIX_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define NIX_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_QINQ_STRIP | \
- DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
- val = ETH_RSS_RETA_SIZE_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
- val = ETH_RSS_RETA_SIZE_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
- val = ETH_RSS_RETA_SIZE_256;
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+ val = RTE_ETH_RSS_RETA_SIZE_64;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+ val = RTE_ETH_RSS_RETA_SIZE_128;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+ val = RTE_ETH_RSS_RETA_SIZE_256;
else
val = NIX_RSS_RETA_SIZE;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
return -EINVAL;
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * NIX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
};
/* Auto negotiation disabled */
- devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
/* 50G and 100G to be supported for board version C0
* and above.
*/
if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
!RTE_IS_POWER_OF_2(sa_width));
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return 0;
if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
uint16_t port = eth_dev->data->port_id;
char name[RTE_MEMZONE_NAMESIZE];
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
goto err_exit;
}
- if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rc = flow_update_sec_tt(dev, actions);
if (rc != 0) {
rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
int rc;
if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
goto done;
if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rsp->rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (rsp->tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
done:
return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (otx2_dev_is_Ax(dev) &&
(dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_conf.mode =
- (fc_conf.mode == RTE_FC_FULL ||
- fc_conf.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_conf.mode == RTE_ETH_FC_FULL ||
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
return 0;
memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
attr, "No support of RSS in egress");
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
*FLOW_KEY_ALG index. So, till we update the action with
*flow_key_alg index, set the action to drop.
*/
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
flow->npc_action = NIX_RX_ACTIONOP_DROP;
else
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
otx2_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
eth_link.link_status = link->link_up;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
static int
lbk_link_update(struct rte_eth_link *link)
{
- link->link_status = ETH_LINK_UP;
- link->link_speed = ETH_SPEED_NUM_100G;
- link->link_autoneg = ETH_LINK_FIXED;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = RTE_ETH_LINK_UP;
+ link->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
link->link_status = rsp->link_info.link_up;
link->link_speed = rsp->link_info.speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (rsp->link_info.full_duplex)
link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
/* 50G and 100G to be supported for board version C0 and above */
if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
link_speed = 100000;
- if (link_speeds & ETH_LINK_SPEED_50G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
link_speed = 50000;
}
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed = 40000;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed = 25000;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed = 20000;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed = 10000;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
link_speed = 5000;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed = 1000;
return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
static inline uint8_t
nix_parse_eth_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
return cgx_change_mode(dev, &cfg);
}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
/* System time should be already on by default */
nix_start_timecounters(eth_dev);
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
return -EINVAL;
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
rss->ind_tbl[idx] = reta_conf[i].reta[j];
idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = rss->ind_tbl[j];
}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
}
#define RSS_IPV4_ENABLE ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE ( \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE ( \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
dev->rss_info.nix_rss = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
otx2_nix_rss_set_key(dev, rss_conf->rss_key,
(uint32_t)rss_conf->rss_key_len);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
int rc;
/* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
return 0;
/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
}
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0d85c898bfe7..2c18483b98fd 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
/* For PTP enabled, scalar rx function should be chosen as most of the
* PTP apps are implemented to rx burst 1 pkt.
*/
- if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
pick_rx_func(eth_dev, nix_eth_rx_burst);
else
pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ad704d745b04..135615580bbf 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
else
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
* Take offset from LA since in case of untagged packet,
* lbptr is zero.
*/
- if (type == ETH_VLAN_TYPE_OUTER) {
+ if (type == RTE_ETH_VLAN_TYPE_OUTER) {
vtag_action.act.vtag0_def = vtag_index;
vtag_action.act.vtag0_lid = NPC_LID_LA;
vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
if (vlan->strip_on ||
(vlan->qinq_on && !vlan->qinq_before_def)) {
if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- ETH_MQ_RX_RSS)
+ RTE_ETH_MQ_RX_RSS)
vlan->def_rx_mcam_ent.action |=
NIX_RX_ACTIONOP_RSS;
else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
rxmode = &eth_dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, true);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, false);
}
if (rc)
goto done;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, true, 0);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, false, 0);
}
if (rc)
goto done;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
if (!dev->vlan_info.qinq_on) {
- offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, true);
if (rc)
goto done;
}
} else {
if (dev->vlan_info.qinq_on) {
- offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, false);
if (rc)
goto done;
}
}
- if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
tpid_cfg->tpid = tpid;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
else
tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
if (rc)
return rc;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
dev->vlan_info.outer_vlan_tpid = tpid;
else
dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
vlan->outer_vlan_idx = 0;
}
- rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
vtag_index, on);
if (rc < 0) {
printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
} else {
/* Reinstall all mcam entries now if filter offload is set */
if (eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
rc = otx2_nix_vlan_offload_set(eth_dev, mask);
if (rc) {
otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
otx_epvf = OTX_EP_DEV(eth_dev);
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
devinfo->max_rx_queues = otx_epvf->max_rx_queues;
devinfo->max_tx_queues = otx_epvf->max_tx_queues;
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l4_len = hdr_lens.l4_len;
if (droq_pkt->nb_segs > 1 &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
static struct pfe *g_pfe;
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
/* TODO: make pfe_svr a runtime option.
* Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
}
link.link_status = lstatus;
- link.link_speed = ETH_LINK_SPEED_1G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_speed = RTE_ETH_LINK_SPEED_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
pfe_eth_atomic_write_link_status(dev, &link);
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t; /* In DWORDS !!! */
struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define ETH_SPEED_AUTONEG 0
-#define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG 0
+#define RTE_ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
u32 pause; /* bitmask */
#define ETH_PAUSE_NONE 0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
}
use_tx_offload = !!(tx_offloads &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
- DEV_TX_OFFLOAD_TCP_TSO | /* tso */
- DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
if (use_tx_offload) {
DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN filtering kicks in when a VLAN is added */
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
qede_vlan_filter_set(eth_dev, 0, 1);
} else {
if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
* enabled
*/
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
qede_vlan_filter_set(eth_dev, 0, 0);
}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
/* Configure default RETA */
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
- id = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ id = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
q = i % QEDE_RSS_COUNT(eth_dev);
reta_conf[id].reta[pos] = q;
}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure TPA parameters */
- if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (qede_enable_tpa(eth_dev, true))
return -EINVAL;
/* Enable scatter mode for LRO */
if (!eth_dev->data->scattered_rx)
- rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}
/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
if (qede_config_rss(eth_dev))
goto err;
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* We need to have min 1 RX queue. There is no min check in
* rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_NOTICE(edev, false,
"Invalid devargs supplied, requested change will not take effect\n");
- if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
- rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+ if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
DP_ERR(edev, "Unsupported multi-queue mode\n");
return -ENOTSUP;
}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->mtu = eth_dev->data->mtu;
/* Enable VLAN offloads by default */
- ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
- dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
dev_info->rx_queue_offload_capa = 0;
/* TX offloads are on a per-packet basis, so it is applicable
* to both at port and queue levels.
*/
- dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
dev_info->default_txconf = (struct rte_eth_txconf) {
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
};
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
- speed_cap |= ETH_LINK_SPEED_1G;
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
- speed_cap |= ETH_LINK_SPEED_10G;
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
- speed_cap |= ETH_LINK_SPEED_25G;
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
- speed_cap |= ETH_LINK_SPEED_40G;
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
- speed_cap |= ETH_LINK_SPEED_50G;
+ speed_cap |= RTE_ETH_LINK_SPEED_50G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
- speed_cap |= ETH_LINK_SPEED_100G;
+ speed_cap |= RTE_ETH_LINK_SPEED_100G;
dev_info->speed_capa = speed_cap;
return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
/* Link Mode */
switch (q_link.duplex) {
case QEDE_DUPLEX_HALF:
- link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case QEDE_DUPLEX_FULL:
- link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case QEDE_DUPLEX_UNKNOWN:
default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
link.link_duplex = link_duplex;
/* Link Status */
- link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
/* AN */
link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
/* Pause is assumed to be supported (SUPPORTED_Pause) */
- if (fc_conf->mode == RTE_FC_FULL)
+ if (fc_conf->mode == RTE_ETH_FC_FULL)
params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
QED_LINK_PAUSE_RX_ENABLE);
- if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
- if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
QED_LINK_PAUSE_TX_ENABLE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
{
*rss_caps = 0;
- *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
}
int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
uint8_t entry;
int rc = 0;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported by hardware\n",
reta_size);
return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
for_each_hwfn(edev, i) {
for (j = 0; j < reta_size; j++) {
- idx = j / RTE_RETA_GROUP_SIZE;
- shift = j % RTE_RETA_GROUP_SIZE;
+ idx = j / RTE_ETH_RETA_GROUP_SIZE;
+ shift = j % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = reta_conf[idx].reta[shift];
fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
uint16_t i, idx, shift;
uint8_t entry;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported\n",
reta_size);
return -EINVAL;
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = qdev->rss_ind_table[i];
reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
adapter->ipgre.num_filters = 0;
if (is_vf) {
adapter->vxlan.enable = true;
- adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
adapter->geneve.enable = true;
- adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
adapter->ipgre.enable = true;
- adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
} else {
adapter->vxlan.enable = false;
adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
const char *string;
} qede_tunn_types[] = {
{
- ETH_TUNNEL_FILTER_OMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC,
ECORE_FILTER_MAC,
ECORE_TUNN_CLSS_MAC_VLAN,
"outer-mac"
},
{
- ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_VNI,
ECORE_TUNN_CLSS_MAC_VNI,
"vni"
},
{
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac"
},
{
- ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_VLAN,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-vlan"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_MAC_VNI,
"outer-mac and vni"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-mac"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-vlan"
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VNI,
"vni and inner-mac",
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"vni and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_OIP,
+ RTE_ETH_TUNNEL_FILTER_OIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-IP"
},
{
- ETH_TUNNEL_FILTER_IIP,
+ RTE_ETH_TUNNEL_FILTER_IIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"inner-IP"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN_TENID"
},
{
- RTE_TUNNEL_FILTER_IMAC_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_TENID"
},
{
- RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
/* check FDIR modes */
switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
ECORE_TUNN_CLSS_MAC_VLAN, false);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
qdev->vxlan.udp_port = udp_port;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
- if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
#define QEDE_MAX_ETHER_HDR_LEN (RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
#define QEDE_ETH_MAX_LEN (RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
-#define QEDE_RSS_OFFLOAD_ALL (ETH_RSS_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP |\
- ETH_RSS_NONFRAG_IPV4_UDP |\
- ETH_RSS_IPV6 |\
- ETH_RSS_NONFRAG_IPV6_TCP |\
- ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN |\
- ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL (RTE_ETH_RSS_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |\
+ RTE_ETH_RSS_IPV6 |\
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |\
+ RTE_ETH_RSS_VXLAN |\
+ RTE_ETH_RSS_GENEVE)
#define QEDE_RXTX_MAX(qdev) \
(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -110,21 +110,21 @@ static int
eth_dev_stop(struct rte_eth_dev *dev)
{
dev->data->dev_started = 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_down(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_up(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = 1;
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
{
uint32_t phy_caps = 0;
- if (~speeds & ETH_LINK_SPEED_FIXED) {
+ if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
phy_caps |= (1 << EFX_PHY_CAP_AN);
/*
* If no speeds are specified in the mask, any supported
* may be negotiated
*/
- if (speeds == ETH_LINK_SPEED_AUTONEG)
+ if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
phy_caps |=
(1 << EFX_PHY_CAP_1000FDX) |
(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
(1 << EFX_PHY_CAP_50000FDX) |
(1 << EFX_PHY_CAP_100000FDX);
}
- if (speeds & ETH_LINK_SPEED_1G)
+ if (speeds & RTE_ETH_LINK_SPEED_1G)
phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
- if (speeds & ETH_LINK_SPEED_10G)
+ if (speeds & RTE_ETH_LINK_SPEED_10G)
phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
- if (speeds & ETH_LINK_SPEED_25G)
+ if (speeds & RTE_ETH_LINK_SPEED_25G)
phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
- if (speeds & ETH_LINK_SPEED_40G)
+ if (speeds & RTE_ETH_LINK_SPEED_40G)
phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
- if (speeds & ETH_LINK_SPEED_50G)
+ if (speeds & RTE_ETH_LINK_SPEED_50G)
phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
- if (speeds & ETH_LINK_SPEED_100G)
+ if (speeds & RTE_ETH_LINK_SPEED_100G)
phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
tx_offloads |= txq_info->offloads;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
else
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
sa->priv.shared->tunnel_encaps =
encp->enc_tunnel_encapsulations_supported;
- if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
if (sa->tso &&
(sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
- (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+ (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
SFC_DP_RX_FEAT_INTR |
SFC_DP_RX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.get_dev_info = sfc_ef100_rx_get_dev_info,
.qsize_up_rings = sfc_ef100_rx_qsize_up_rings,
.qcreate = sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
.features = SFC_DP_TX_FEAT_MULTI_PROCESS |
SFC_DP_TX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef100_get_dev_info,
.qsize_up_rings = sfc_ef100_tx_qsize_up_rings,
.qcreate = sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
},
.features = SFC_DP_RX_FEAT_FLOW_FLAG |
SFC_DP_RX_FEAT_FLOW_MARK,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.queue_offload_capa = 0,
.get_dev_info = sfc_ef10_essb_rx_get_dev_info,
.pool_ops_supported = sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
},
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.get_dev_info = sfc_ef10_rx_get_dev_info,
.qsize_up_rings = sfc_ef10_rx_qsize_up_rings,
.qcreate = sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
if (txq->sw_ring == NULL)
goto fail_sw_ring_alloc;
- if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+ if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
info->txq_entries,
SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_EF10,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
.type = SFC_DP_TX,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = sa->sriov.num_vfs;
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->max_rx_queues = sa->rxq_max;
dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
dev_info->tx_queue_offload_capa;
- if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->default_txconf.offloads |= txq_offloads_def;
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
switch (link_fc) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case EFX_FCNTL_RESPOND:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case EFX_FCNTL_GENERATE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
default:
sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
fcntl = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
fcntl = EFX_FCNTL_RESPOND;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
fcntl = EFX_FCNTL_GENERATE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
break;
default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
- qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
qinfo->scattered_rx = 1;
}
qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
{
switch (rte_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
return EFX_TUNNEL_PROTOCOL_VXLAN;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
return EFX_TUNNEL_PROTOCOL_GENEVE;
default:
return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
/*
* Mapping of hash configuration between RTE and EFX is not one-to-one,
- * hence, conversion is done here to derive a correct set of ETH_RSS
+ * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
* flags which corresponds to the active EFX configuration stored
* locally in 'sfc_adapter' and kept up-to-date
*/
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (entry = 0; entry < reta_size; entry++) {
- int grp = entry / RTE_RETA_GROUP_SIZE;
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[grp].mask >> grp_idx) & 1)
reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
for (entry = 0; entry < reta_size; entry++) {
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
struct rte_eth_rss_reta_entry64 *grp;
- grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+ grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
if (grp->mask & (1ull << grp_idx)) {
if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
- .tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+ .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
.inner_type = RTE_BE16(0xffff),
};
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
memset(link_info, 0, sizeof(*link_info));
if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
- link_info->link_status = ETH_LINK_DOWN;
+ link_info->link_status = RTE_ETH_LINK_DOWN;
else
- link_info->link_status = ETH_LINK_UP;
+ link_info->link_status = RTE_ETH_LINK_UP;
switch (link_mode) {
case EFX_LINK_10HDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_10FDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100HDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_100FDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_1000HDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_1000FDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_10000FDX:
- link_info->link_speed = ETH_SPEED_NUM_10G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_25000FDX:
- link_info->link_speed = ETH_SPEED_NUM_25G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_40000FDX:
- link_info->link_speed = ETH_SPEED_NUM_40G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_40G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_50000FDX:
- link_info->link_speed = ETH_SPEED_NUM_50G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_50G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100000FDX:
- link_info->link_speed = ETH_SPEED_NUM_100G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
SFC_ASSERT(B_FALSE);
/* FALLTHROUGH */
case EFX_LINK_UNKNOWN:
case EFX_LINK_DOWN:
- link_info->link_speed = ETH_SPEED_NUM_NONE;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_NONE;
link_info->link_duplex = 0;
break;
}
- link_info->link_autoneg = ETH_LINK_AUTONEG;
+ link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
}
switch (conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (nb_rx_queues != 1) {
sfcr_err(sr, "Rx RSS is not supported with %u queues",
nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
ret = -EINVAL;
}
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
break;
default:
sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
break;
}
- if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+ if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
sfcr_err(sr, "Tx mode MQ modes not supported");
ret = -EINVAL;
}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
} else {
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
}
return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_RX_EFX,
},
.features = SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.qsize_up_rings = sfc_efx_rx_qsize_up_rings,
.qcreate = sfc_efx_rx_qcreate,
.qdestroy = sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (encp->enc_tunnel_encapsulations_supported == 0)
- no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
return ~no_caps;
}
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
encp->enc_rx_prefix_size,
- (offloads & DEV_RX_OFFLOAD_SCATTER),
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
encp->enc_rx_scatter_max,
&error)) {
sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
rxq_info->type_flags |=
- (offloads & DEV_RX_OFFLOAD_SCATTER) ?
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
if ((encp->enc_tunnel_encapsulations_supported != 0) &&
(sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->refill_mb_pool = mb_pool;
if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
- (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
else
rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
* Mapping between RTE RSS hash functions and their EFX counterparts.
*/
static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
- { ETH_RSS_NONFRAG_IPV4_TCP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV4_UDP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP,
EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
- { ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE) },
- { ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_IPV6_EX,
+ { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_EX,
EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
EFX_RX_HASH(IPV6, 2TUPLE) }
};
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
int rc = 0;
switch (rxmode->mq_mode) {
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* No special checks are required */
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
sfc_err(sa, "RSS is not available");
rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
* so unsupported offloads cannot be added as the result of
* below check.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
- (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+ (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
- rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
- if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
- rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
}
return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
}
configure_rss:
- rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+ rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (!encp->enc_hw_tx_insert_vlan_enabled)
- no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (!encp->enc_tunnel_encapsulations_supported)
- no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (!sa->tso)
- no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
- no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
- no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
return ~no_caps;
}
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
}
/* We either perform both TCP and UDP offload, or no offload at all */
- if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
- ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+ if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+ ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
sfc_err(sa, "TCP and UDP offloads can't be set independently");
rc = EINVAL;
}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
int rc = 0;
switch (txmode->mq_mode) {
- case ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_NONE:
break;
default:
sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
if (rc != 0)
goto fail_ev_qstart;
- if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
- if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
- if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
flags |= EFX_TXQ_CKSUM_TCPUDP;
- if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
- if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/*
* Here VLAN TCI is expected to be zero in case if no
- * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
* if the calling app ignores the absence of
- * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
* TX_ERROR will occur
*/
pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_TX_EFX,
},
.features = 0,
- .dev_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
.qsize_up_rings = sfc_efx_tx_qsize_up_rings,
.qcreate = sfc_efx_tx_qcreate,
.qdestroy = sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
return status;
/* Link UP */
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
struct pmd_internals *p = dev->data->dev_private;
/* Link DOWN */
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
/* Firmware */
softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
/* dev->data */
dev->data->dev_private = dev_private;
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
dev->data->mac_addrs = &eth_addr;
dev->data->promiscuous = 1;
dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
eth_dev_configure(struct rte_eth_dev *dev)
{
struct rte_eth_dev_data *data = dev->data;
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
dev->rx_pkt_burst = eth_szedata2_rx_scattered;
data->scattered_rx = 1;
} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_queues = internals->max_rx_queues;
dev_info->max_tx_queues = internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
dev_info->tx_offload_capa = 0;
dev_info->rx_queue_offload_capa = 0;
dev_info->tx_queue_offload_capa = 0;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_status = ETH_LINK_UP;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
#define TAP_IOV_DEFAULT_MAX 1024
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
static int tap_devices_count;
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
static volatile uint32_t tap_trigger; /* Rx trigger */
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
len = readv(process_private->rxq_fds[rxq->queue_id],
*rxq->iovecs,
- 1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+ 1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
rxq->nb_rx_desc : 1));
if (len < (int)sizeof(struct tun_pi))
break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
seg->next = NULL;
mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
RTE_PTYPE_ALL_MASK);
- if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
tap_verify_csum(mbuf);
/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
}
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
}
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
uint32_t speed = pmd_link.link_speed;
uint32_t capa = 0;
- if (speed >= ETH_SPEED_NUM_10M)
- capa |= ETH_LINK_SPEED_10M;
- if (speed >= ETH_SPEED_NUM_100M)
- capa |= ETH_LINK_SPEED_100M;
- if (speed >= ETH_SPEED_NUM_1G)
- capa |= ETH_LINK_SPEED_1G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_2_5G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_5G;
- if (speed >= ETH_SPEED_NUM_10G)
- capa |= ETH_LINK_SPEED_10G;
- if (speed >= ETH_SPEED_NUM_20G)
- capa |= ETH_LINK_SPEED_20G;
- if (speed >= ETH_SPEED_NUM_25G)
- capa |= ETH_LINK_SPEED_25G;
- if (speed >= ETH_SPEED_NUM_40G)
- capa |= ETH_LINK_SPEED_40G;
- if (speed >= ETH_SPEED_NUM_50G)
- capa |= ETH_LINK_SPEED_50G;
- if (speed >= ETH_SPEED_NUM_56G)
- capa |= ETH_LINK_SPEED_56G;
- if (speed >= ETH_SPEED_NUM_100G)
- capa |= ETH_LINK_SPEED_100G;
+ if (speed >= RTE_ETH_SPEED_NUM_10M)
+ capa |= RTE_ETH_LINK_SPEED_10M;
+ if (speed >= RTE_ETH_SPEED_NUM_100M)
+ capa |= RTE_ETH_LINK_SPEED_100M;
+ if (speed >= RTE_ETH_SPEED_NUM_1G)
+ capa |= RTE_ETH_LINK_SPEED_1G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_2_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_10G)
+ capa |= RTE_ETH_LINK_SPEED_10G;
+ if (speed >= RTE_ETH_SPEED_NUM_20G)
+ capa |= RTE_ETH_LINK_SPEED_20G;
+ if (speed >= RTE_ETH_SPEED_NUM_25G)
+ capa |= RTE_ETH_LINK_SPEED_25G;
+ if (speed >= RTE_ETH_SPEED_NUM_40G)
+ capa |= RTE_ETH_LINK_SPEED_40G;
+ if (speed >= RTE_ETH_SPEED_NUM_50G)
+ capa |= RTE_ETH_LINK_SPEED_50G;
+ if (speed >= RTE_ETH_SPEED_NUM_56G)
+ capa |= RTE_ETH_LINK_SPEED_56G;
+ if (speed >= RTE_ETH_SPEED_NUM_100G)
+ capa |= RTE_ETH_LINK_SPEED_100G;
return capa;
}
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
if (!(ifr.ifr_flags & IFF_UP) ||
!(ifr.ifr_flags & IFF_RUNNING)) {
- dev_link->link_status = ETH_LINK_DOWN;
+ dev_link->link_status = RTE_ETH_LINK_DOWN;
return 0;
}
}
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
dev_link->link_status =
((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
- ETH_LINK_UP :
- ETH_LINK_DOWN);
+ RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN);
return 0;
}
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
int ret;
/* initialize GSO context */
- gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!pmd->gso_ctx_mp) {
/*
* Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->csum = !!(offloads &
- (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM));
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1768,7 +1768,7 @@ static int
tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- if (fc_conf->mode != RTE_FC_NONE)
+ if (fc_conf->mode != RTE_ETH_FC_NONE)
return -ENOTSUP;
return 0;
}
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
}
}
}
- pmd_link.link_speed = ETH_SPEED_NUM_10G;
+ pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
return 0;
}
- speed = ETH_SPEED_NUM_10G;
+ speed = RTE_ETH_SPEED_NUM_10G;
/* use tap%d which causes kernel to choose next available */
strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
#define TAP_RSS_HASH_KEY_SIZE 40
/* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
/* hashed fields for RSS */
enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8ce9a99dc074..762647e3b6ee 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
if (nic->duplex == NICVF_HALF_DUPLEX)
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else if (nic->duplex == NICVF_FULL_DUPLEX)
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_speed = nic->speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* rte_eth_link_get() might need to wait up to 9 seconds */
for (i = 0; i < MAX_CHECK_TIME; i++) {
nicvf_link_status_update(nic, &link);
- if (link.link_status == ETH_LINK_UP)
+ if (link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
{
uint64_t nic_rss = 0;
- if (ethdev_rss & ETH_RSS_IPV4)
+ if (ethdev_rss & RTE_ETH_RSS_IPV4)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_IPV6)
+ if (ethdev_rss & RTE_ETH_RSS_IPV6)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
nic_rss |= RSS_TUN_VXLAN_ENA;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
nic_rss |= RSS_TUN_GENEVE_ENA;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
nic_rss |= RSS_TUN_NVGRE_ENA;
}
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
uint64_t ethdev_rss = 0;
if (nic_rss & RSS_IP_ENA)
- ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+ ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP);
if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
- ethdev_rss |= ETH_RSS_PORT;
+ ethdev_rss |= RTE_ETH_RSS_PORT;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
if (nic_rss & RSS_TUN_VXLAN_ENA)
- ethdev_rss |= ETH_RSS_VXLAN;
+ ethdev_rss |= RTE_ETH_RSS_VXLAN;
if (nic_rss & RSS_TUN_GENEVE_ENA)
- ethdev_rss |= ETH_RSS_GENEVE;
+ ethdev_rss |= RTE_ETH_RSS_GENEVE;
if (nic_rss & RSS_TUN_NVGRE_ENA)
- ethdev_rss |= ETH_RSS_NVGRE;
+ ethdev_rss |= RTE_ETH_RSS_NVGRE;
}
return ethdev_rss;
}
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = tbl[j];
}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
tbl[j] = reta_conf[i].reta[j];
}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
dev->data->nb_rx_queues,
dev->data->dev_conf.lpbk_mode, rsshf);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
ret = nicvf_rss_term(nic);
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
if (ret)
PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
multiseg = true;
break;
}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->offloads = offloads;
- is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+ is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
/* Choose optimum free threshold value for multipool case */
if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
PMD_INIT_FUNC_TRACE();
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
};
return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
/* Configure VLAN Strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = nicvf_vlan_offload_config(dev, mask);
/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
/* Setup scatter mode if needed by jumbo */
if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!rte_eal_has_hugepages()) {
PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
return -EINVAL;
}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic->offload_cksum = 1;
PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct nicvf *nic = nicvf_pmd_priv(dev);
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
nicvf_vlan_hw_strip(nic, true);
else
nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
#define NICVF_UNKNOWN_DUPLEX 0xff
#define NICVF_RSS_OFFLOAD_PASS1 ( \
- ETH_RSS_PORT | \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define NICVF_RSS_OFFLOAD_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
#define NICVF_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define NICVF_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NICVF_DEFAULT_RX_FREE_THRESH 224
#define NICVF_DEFAULT_TX_FREE_THRESH 224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
restart = (rxcfg & TXGBE_RXCFG_ENA) &&
!(rxcfg & TXGBE_RXCFG_VLAN);
rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
wr32m(hw, TXGBE_VLANCTL,
TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
TXGBE_TAGTPID_LSB(tpid));
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (vlan_ext) {
/* Only the high 16-bits is valid */
wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
txgbe_vlan_strip_queue_set(dev, i, 1);
else
txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct txgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
txgbe_vlan_hw_strip_config(dev);
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
txgbe_vlan_hw_filter_enable(dev);
else
txgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
txgbe_vlan_hw_extend_enable(dev);
else
txgbe_vlan_hw_extend_disable(dev);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
txgbe_qinq_hw_strip_enable(dev);
else
txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode =
- ETH_MQ_RX_VMDQ_ONLY;
+ RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
dev->data->dev_conf.txmode.mq_mode =
- ETH_MQ_TX_VMDQ_ONLY;
+ RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
txgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
if (err)
goto error;
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
link_speeds = &dev->data->dev_conf.link_speeds;
if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = (TXGBE_LINK_SPEED_100M_FULL |
TXGBE_LINK_SPEED_1GB_FULL |
TXGBE_LINK_SPEED_10GB_FULL);
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= TXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= TXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= TXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= TXGBE_LINK_SPEED_100M_FULL;
}
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_desc_lim = tx_desc_lim;
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_AUTONEG);
hw->mac.get_link_status = true;
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
}
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case TXGBE_LINK_SPEED_UNKNOWN:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case TXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case TXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case TXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
return -ENOTSUP;
}
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
wr32(hw, TXGBE_UCADDRTBL(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
wr32(hw, TXGBE_UCADDRTBL(i), 0);
}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= TXGBE_POOLETHCTL_UTA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= TXGBE_POOLETHCTL_MCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= TXGBE_POOLETHCTL_UCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= TXGBE_POOLETHCTL_BCA;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= TXGBE_POOLETHCTL_MCP;
return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = TXGBE_INCVAL_100;
shift = TXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = TXGBE_INCVAL_1GB;
shift = TXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = TXGBE_INCVAL_10GB;
shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, 0);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, 0);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, 0);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
#define TXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define TXGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define TXGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
txgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
txgbevf_vlan_strip_queue_set(dev, i, on);
}
}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
uint32_t *fdirctrl, uint32_t *flex)
{
*fdirctrl = 0;
*flex = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_signature_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= TXGBE_SECRXCTL_CRCSTRIP;
wr32(hw, TXGBE_SECRXCTL, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
reg = rd32(hw, TXGBE_SECTXCTL);
if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct txgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
break;
}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &eth_dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = TXGBE_DEV_HW(eth_dev);
vmvir = rd32(hw, TXGBE_POOLTAG(vf));
vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
uint64_t
txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_VLAN_STRIP;
+ return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (!txgbe_is_vf(dev))
- offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_VLAN_EXTEND);
+ offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
/*
* RSC is only supported by PF devices in a non-SR-IOV
* mode.
*/
if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == txgbe_mac_raptor)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_UDP_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (!txgbe_is_vf(dev))
- tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_VFPLCFG_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
} else {
mrqc = rd32(hw, TXGBE_RACTL);
mrqc &= ~TXGBE_RACTL_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_RACTL_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_RACTL_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_RACTL_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_RACTL_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6UDP;
if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
rss_hf = 0;
} else {
mrqc = rd32(hw, TXGBE_RACTL);
if (mrqc & TXGBE_RACTL_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_RACTL_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_RACTL_RSSENA))
rss_hf = 0;
}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
*/
if (adapter->rss_reta_updated == 0) {
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == dev->data->nb_rx_queues)
j = 0;
reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
txgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
- if (num_pools == ETH_16_POOLS) {
+ if (num_pools == RTE_ETH_16_POOLS) {
mrqc = TXGBE_PORTCTL_NUMTC_8;
mrqc |= TXGBE_PORTCTL_NUMVT_16;
} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_POOLCTL, vt_ctl);
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
wr32(hw, TXGBE_POOLRXENA(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
wr32(hw, TXGBE_ETHADDRIDX, 0);
wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
/*PF VF Transmit Enable*/
wr32(hw, TXGBE_POOLTXENA(0),
vmdq_tx_conf->nb_queue_pools ==
- ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_rx = DCB_RX_CONFIG;
/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
/*Configure general VMDQ and DCB RX parameters*/
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
- if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
wr32(hw, TXGBE_PBRXSIZE(i), 0);
}
if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
wr32(hw, TXGBE_PBTXSIZE(i), 0);
wr32(hw, TXGBE_PBTXDMATH(i), 0);
}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = txgbe_dcb_pfc_enabled;
}
txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* pool enabling for receive - 64 */
wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
txgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
txgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
txgbe_vmdq_tx_hw_configure(hw);
else
wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
"is disabled");
return -EINVAL;
}
rfctl = rd32(hw, TXGBE_PSRCTL);
- if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~TXGBE_PSRCTL_RSCDIA;
else
rfctl |= TXGBE_PSRCTL_RSCDIA;
wr32(hw, TXGBE_PSRCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
}
#endif
}
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = rd32(hw, TXGBE_PSRCTL);
rxcsum |= TXGBE_PSRCTL_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= TXGBE_PSRCTL_L4CSUM;
else
rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == txgbe_mac_raptor) {
rdrxctl = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
txgbe_setup_loopback_link_raptor(hw);
#ifdef RTE_LIB_SECURITY
- if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
- (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = txgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Set PSR type for VF RSS according to max Rx queue */
psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
*/
wr32(hw, TXGBE_RXCFG(i), srrctl);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
* little-endian order.
*/
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == conf->conf.queue_num)
j = 0;
reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
uint8_t rx_deferred_start; /**< not in global dev start. */
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 86498365e149..17b6a1a1ceec 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN
};
struct rte_vhost_vring_state {
@@ -817,7 +817,7 @@ new_device(int vid)
rte_vhost_get_mtu(vid, ð_dev->data->mtu);
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_atomic32_set(&internal->dev_attached, 1);
update_queuing_status(eth_dev);
@@ -852,7 +852,7 @@ destroy_device(int vid)
rte_atomic32_set(&internal->dev_attached, 0);
update_queuing_status(eth_dev);
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1118,7 +1118,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
if (vhost_driver_setup(dev) < 0)
return -1;
- internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -1267,9 +1267,9 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_tx_queues = internal->max_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index ddf0e26ab4db..94120b349023 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
virtio_dev_close(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "virtio_dev_close");
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1774,7 +1774,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
- if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+ if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
config = &local_config;
virtio_read_dev_config(hw,
@@ -1788,7 +1788,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
}
}
if (hw->duplex == DUPLEX_UNKNOWN)
- hw->duplex = ETH_LINK_FULL_DUPLEX;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
hw->speed, hw->duplex);
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1887,7 +1887,7 @@ int
eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+ uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
int vectorized = 0;
int ret;
@@ -1958,22 +1958,22 @@ static uint32_t
virtio_dev_speed_capa_get(uint32_t speed)
{
switch (speed) {
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -2089,14 +2089,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "configure");
req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
- if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Rx multi queue mode %d",
rxmode->mq_mode);
return -EINVAL;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Tx multi queue mode %d",
txmode->mq_mode);
@@ -2114,20 +2114,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
req_features |=
(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
- if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_CSUM);
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
req_features |=
(1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2139,15 +2139,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
PMD_DRV_LOG(ERR,
"rx checksum not available on this host");
return -ENOTSUP;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
PMD_DRV_LOG(ERR,
@@ -2159,12 +2159,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
virtio_dev_cq_start(dev);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
hw->vlan_strip = 1;
- hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+ hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
- if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(ERR,
"vlan filtering not available on this host");
@@ -2217,7 +2217,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(INFO,
"disabled packed ring vectorized rx for TCP_LRO enabled");
hw->use_vec_rx = 0;
@@ -2244,10 +2244,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN_STRIP)) {
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
PMD_DRV_LOG(INFO,
"disabled split ring vectorized rx for offloading enabled");
hw->use_vec_rx = 0;
@@ -2440,7 +2440,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
struct rte_eth_link link;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "stop");
dev->data->dev_started = 0;
@@ -2481,28 +2481,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
memset(&link, 0, sizeof(link));
link.link_duplex = hw->duplex;
link.link_speed = hw->speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (!hw->started) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
PMD_INIT_LOG(DEBUG, "Get link status from hw");
virtio_read_dev_config(hw,
offsetof(struct virtio_net_config, status),
&status, sizeof(status));
if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
PMD_INIT_LOG(DEBUG, "Port %d is down",
dev->data->port_id);
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
PMD_INIT_LOG(DEBUG, "Port %d is up",
dev->data->port_id);
}
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2515,8 +2515,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct virtio_hw *hw = dev->data->dev_private;
uint64_t offloads = rxmode->offloads;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(NOTICE,
@@ -2526,8 +2526,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (mask & ETH_VLAN_STRIP_MASK)
- hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
+ hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -2549,32 +2549,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = hw->max_mtu;
host_features = VIRTIO_OPS(hw)->get_features(hw);
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}
tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
/*
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
#define VMXNET3_TX_MAX_SEG UINT8_MAX
#define VMXNET3_TX_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define VMXNET3_RX_OFFLOAD_CAP \
- (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* set the initial link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(eth_dev, &link);
return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
hw->queueDescPA = mz->iova;
hw->queue_desc_len = (uint16_t)size;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Allocate memory structure for UPT1_RSSConf and configure */
mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
"rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
devRead->rxFilterConf.rxMode = 0;
/* Setting up feature flags */
- if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
devRead->misc.uptFeatures |= VMXNET3_F_LRO;
devRead->misc.maxNumRxSG = 0;
}
- if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
ret = vmxnet3_rss_configure(dev);
if (ret != VMXNET3_SUCCESS)
return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
}
ret = vmxnet3_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
}
if (VMXNET3_VERSION_GE_4(hw) &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Check for additional RSS */
ret = vmxnet3_v4_rss_configure(dev);
if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
dev_info->max_mtu = VMXNET3_MAX_MTU;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret & 0x1)
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint32_t *vf_table = devRead->rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
else
devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
VMXNET3_CMD_UPDATE_FEATURE);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
VMXNET3_MAX_RX_QUEUES + 1)
#define VMXNET3_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define VMXNET3_V4_RSS_MASK ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define VMXNET3_MANDATORY_V4_RSS ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
/* RSS configuration structure - shared with device through GPA */
typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
rss_hf = port_rss_conf->rss_hf &
(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
/* loading hashType */
dev_rss_conf->hashType = 0;
rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index a26076b312e5..ecafc5e4f1a9 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -70,11 +70,11 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -327,7 +327,7 @@ check_port_link_status(uint16_t port_id)
if (link_get_err >= 0 && link.link_status) {
const char *dp = (link.link_duplex ==
- ETH_LINK_FULL_DUPLEX) ?
+ RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex";
printf("\nPort %u Link Up - speed %s - %s\n",
port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index fd8fd767c811..1087b0dad125 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -114,17 +114,17 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -148,9 +148,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
@@ -240,9 +240,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
BOND_PORT, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
if (retval != 0)
rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
}
},
};
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
int ret;
memset(&cfg_port, 0, sizeof(cfg_port));
- cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+ cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
pause_param->tx_pause = 0;
pause_param->rx_pause = 0;
switch (fc_conf.mode) {
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
pause_param->rx_pause = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
pause_param->tx_pause = 1;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_param->rx_pause = 1;
pause_param->tx_pause = 1;
default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
if (pause_param->tx_pause) {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_FULL;
+ fc_conf.mode = RTE_ETH_FC_FULL;
else
- fc_conf.mode = RTE_FC_TX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
} else {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_RX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf.mode = RTE_FC_NONE;
+ fc_conf.mode = RTE_ETH_FC_NONE;
}
status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
for (vf = 0; vf < num_vfs; vf++) {
#ifdef RTE_NET_IXGBE
rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
- ETH_VMDQ_ACCEPT_UNTAG, 0);
+ RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
#endif
}
/* Enable Rx vlan filter, VF unspport status is discard */
- ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
if (ret != 0)
return ret;
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
memset(&link, 0, sizeof(link));
do {
link_get_err = rte_eth_link_get(port_id, &link);
- if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+ if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
if (link_get_err < 0)
rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
rte_strerror(-link_get_err));
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
}
@@ -138,12 +138,12 @@ init_port(void)
},
.txmode = {
.offloads =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
},
};
struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
/* Configuring port to use RSS for multiple RX queues. 8< */
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_PROTO_MASK,
+ .rss_hf = RTE_ETH_RSS_PROTO_MASK,
}
}
};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index d51133199c42..4ffe997baf23 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,13 +148,13 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER),
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -623,7 +623,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -267,5 +267,5 @@ link_is_up(const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 06dc42799314..41e35593867b 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -160,22 +160,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -737,7 +737,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1095,9 +1095,9 @@ main(int argc, char **argv)
n_tx_queue = nb_lcores;
if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
n_tx_queue = MAX_TX_QUEUE_PER_PORT;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a10e330f5003..1c60ac28e317 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -233,19 +233,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1444,10 +1444,10 @@ print_usage(const char *prgname)
" \"parallel\" : Parallel\n"
" --" CMD_LINE_OPT_RX_OFFLOAD
": bitmask of the RX HW offload capabilities to enable/use\n"
- " (DEV_RX_OFFLOAD_*)\n"
+ " (RTE_ETH_RX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_TX_OFFLOAD
": bitmask of the TX HW offload capabilities to enable/use\n"
- " (DEV_TX_OFFLOAD_*)\n"
+ " (RTE_ETH_TX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_REASSEMBLE " NUM"
": max number of entries in reassemble(fragment) table\n"
" (zero (default value) disables reassembly)\n"
@@ -1898,7 +1898,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2201,8 +2201,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2225,12 +2225,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
portid, local_port_conf.txmode.offloads,
dev_info.tx_offload_capa);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
printf("port %u configuring rx_offloads=0x%" PRIx64
", tx_offloads=0x%" PRIx64 "\n",
@@ -2288,7 +2288,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
/* Pre-populate pkt offloads based on capabilities */
qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
- if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
tx_queueid++;
@@ -2649,7 +2649,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
struct rte_flow *flow;
int ret;
- if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
if (inbound) {
if ((dev_info.rx_offload_capa &
- DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware RX IPSec offload is not supported\n");
return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
} else { /* outbound */
if ((dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+ *rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
}
/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
return 0;
}
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 73391ce1a96d..bdcaa3bcd1ca 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -114,8 +114,8 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
},
};
@@ -619,7 +619,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 69a0afced6cc..d324ee224109 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -94,7 +94,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -607,9 +607,9 @@ init_port(uint16_t port)
"Error during getting device (port %u) info: %s\n",
port, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -687,7 +687,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6e2016752fca..04a3bdace20c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,11 +215,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1807,7 +1807,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2631,9 +2631,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (retval < 0) {
printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
int ret;
if (rsrc->event_mode) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}
/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
local_port_conf.rx_adv_conf.rss_conf.rss_hf);
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queue. 8< */
ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 62981663ea78..d8eabe4c869e 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -93,7 +93,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -725,7 +725,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -868,9 +868,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index af59d51b3ec4..78fc48f781fc 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -477,7 +477,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -649,9 +649,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 8feb50e0f542..c9d8d4918a34 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -605,7 +605,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -791,9 +791,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the number of queues for a port. */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 410ec94b4131..1fb180723582 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -123,19 +123,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1935,7 +1935,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2003,7 +2003,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2087,9 +2087,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* Clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -828,9 +828,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 39624993b081..21c79567b1f7 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -249,18 +249,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_UDP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
@@ -2196,7 +2196,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2509,7 +2509,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2637,9 +2637,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
rte_panic("Error during getting device (port %u) info:"
"%s\n", port_id, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 202ef78b6e95..5dd3e4136ea1 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -119,18 +119,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -902,7 +902,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -987,7 +987,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1052,15 +1052,15 @@ l3fwd_poll_resource_setup(void)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
if (dev_info.max_rx_queues == 1)
- local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index ce8ae059d789..551f0524da79 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.intr_conf = {
.lsc = 1, /**< lsc interrupt feature enabled */
@@ -146,7 +146,7 @@ print_stats(void)
link_get_err < 0 ? "0" :
rte_eth_link_speed_to_str(link.link_speed),
link_get_err < 0 ? "Link get failed" :
- (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex"),
port_statistics[portid].tx,
port_statistics[portid].rx,
@@ -506,7 +506,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -633,9 +633,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index be669c2bcc06..a4d7a3e5436a 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -93,7 +93,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS
+ .mq_mode = RTE_ETH_MQ_RX_RSS
}
};
const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -212,7 +212,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index a66328ba0caf..b35886a77b00 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -175,18 +175,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
{
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -217,9 +217,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
info.default_rxconf.rx_drop_en = 1;
- if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -391,7 +391,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
static struct rte_eth_conf eth_port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
return ret;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
if (ret != 0)
return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 74e016e1d20d..3a6a33bda3b0 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -306,18 +306,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_TCP,
+ .rss_hf = RTE_ETH_RSS_TCP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -3437,7 +3437,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3490,7 +3490,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -3589,9 +3589,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Force full Tx path in the driver, required for IEEE1588 */
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
***/
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -332,8 +332,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_rx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_tx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (hw_timestamping) {
- if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+ if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
printf("\nERROR: Port %u does not support hardware timestamping\n"
, port);
return -1;
}
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
if (hwts_dynfield_offset < 0) {
printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index a19934dbe0c8..0e5e3b5a9815 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -95,7 +95,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -114,9 +114,9 @@ init_port(uint16_t port_num)
if (retval != 0)
return retval;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/*
* Standard DPDK port initialisation - config port, then set up
@@ -276,7 +276,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 97218917067e..44376417f83d 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
/* empty vmdq configuration structure. Filled in programatically */
static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
/*
* VLAN strip is necessary for 1G NIC such as I350,
* this fixes bug of ipv4 forwarding in guest can't
* forward pakets from one virtio dev to another virtio dev.
*/
- .offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO),
},
.rx_adv_conf = {
/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
return -1;
rx_rings = (uint16_t)dev_info.max_rx_queues;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
case 'P':
promiscuous = 1;
vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
- ETH_VMDQ_ACCEPT_BROADCAST |
- ETH_VMDQ_ACCEPT_MULTICAST;
+ RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+ RTE_ETH_VMDQ_ACCEPT_MULTICAST;
break;
case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 85996bf864b7..feee642f594d 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -65,12 +65,12 @@ static uint8_t rss_enable;
/* empty vmdq configuration structure. Filled in programatically */
static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
/*
@@ -78,7 +78,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -156,11 +156,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -258,9 +258,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
if (retval != 0)
return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index be0179fdeaf0..d2218f2cf741 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -59,8 +59,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
static unsigned num_ports;
/* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs num_tcs = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs num_tcs = RTE_ETH_4_TCS;
static uint16_t num_queues, num_vmdq_queues;
static uint16_t vmdq_pool_base, vmdq_queue_base;
static uint8_t rss_enable;
@@ -68,11 +68,11 @@ static uint8_t rss_enable;
/* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
@@ -80,7 +80,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -88,12 +88,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.dcb_tc = {0},
},
.dcb_rx_conf = {
- .nb_tcs = ETH_4_TCS,
+ .nb_tcs = RTE_ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -102,7 +102,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.dcb_tc = {0},
},
},
@@ -156,7 +156,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
conf.pool_map[i].pools = 1UL << i;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
@@ -172,11 +172,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -270,9 +270,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -381,9 +381,9 @@ vmdq_parse_num_pools(const char *q_arg)
if (n != 16 && n != 32)
return -1;
if (n == 16)
- num_pools = ETH_16_POOLS;
+ num_pools = RTE_ETH_16_POOLS;
else
- num_pools = ETH_32_POOLS;
+ num_pools = RTE_ETH_32_POOLS;
return 0;
}
@@ -403,9 +403,9 @@ vmdq_parse_num_tcs(const char *q_arg)
if (n != 4 && n != 8)
return -1;
if (n == 4)
- num_tcs = ETH_4_TCS;
+ num_tcs = RTE_ETH_4_TCS;
else
- num_tcs = ETH_8_TCS;
+ num_tcs = RTE_ETH_8_TCS;
return 0;
}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b530ac6e320a..dcbffd4265fa 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -114,7 +114,7 @@ struct rte_eth_dev_data {
/** Device Ethernet link address. @see rte_eth_dev_release_port() */
struct rte_ether_addr *mac_addrs;
/** Bitmap associating MAC addresses to pools */
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
/**
* Device Ethernet MAC addresses of hash filtering.
* @see rte_eth_dev_release_port()
@@ -1700,23 +1700,23 @@ struct rte_eth_syn_filter {
/**
* filter type of tunneling packet
*/
-#define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
- ETH_TUNNEL_FILTER_TENID | \
- ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID | \
+ RTE_ETH_TUNNEL_FILTER_IMAC)
/**
* Select IPv4 or IPv6 for tunnel filters.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4ea5a657e003..9b6007803dd8 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
#define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
#define RTE_RX_OFFLOAD_BIT2STR(_name) \
- { DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name) \
{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
static const struct {
@@ -128,14 +125,14 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
- RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
};
#undef RTE_RX_OFFLOAD_BIT2STR
#undef RTE_ETH_RX_OFFLOAD_BIT2STR
#define RTE_TX_OFFLOAD_BIT2STR(_name) \
- { DEV_TX_OFFLOAD_##_name, #_name }
+ { RTE_ETH_TX_OFFLOAD_##_name, #_name }
static const struct {
uint64_t offload;
@@ -1182,32 +1179,32 @@ uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
switch (speed) {
- case ETH_SPEED_NUM_10M:
- return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
- case ETH_SPEED_NUM_100M:
- return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
- case ETH_SPEED_NUM_1G:
- return ETH_LINK_SPEED_1G;
- case ETH_SPEED_NUM_2_5G:
- return ETH_LINK_SPEED_2_5G;
- case ETH_SPEED_NUM_5G:
- return ETH_LINK_SPEED_5G;
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10M:
+ return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+ case RTE_ETH_SPEED_NUM_100M:
+ return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+ case RTE_ETH_SPEED_NUM_1G:
+ return RTE_ETH_LINK_SPEED_1G;
+ case RTE_ETH_SPEED_NUM_2_5G:
+ return RTE_ETH_LINK_SPEED_2_5G;
+ case RTE_ETH_SPEED_NUM_5G:
+ return RTE_ETH_LINK_SPEED_5G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -1528,7 +1525,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t max_rx_pktlen;
uint32_t overhead_len;
@@ -1585,12 +1582,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
- if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
- (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+ (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
port_id,
- rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+ rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
ret = -EINVAL;
goto rollback;
}
@@ -2213,7 +2210,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* size is supported by the configured device.
*/
/* Get the real Ethernet overhead length */
- if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t overhead_len;
uint32_t max_rx_pktlen;
int ret;
@@ -2793,21 +2790,21 @@ const char *
rte_eth_link_speed_to_str(uint32_t link_speed)
{
switch (link_speed) {
- case ETH_SPEED_NUM_NONE: return "None";
- case ETH_SPEED_NUM_10M: return "10 Mbps";
- case ETH_SPEED_NUM_100M: return "100 Mbps";
- case ETH_SPEED_NUM_1G: return "1 Gbps";
- case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
- case ETH_SPEED_NUM_5G: return "5 Gbps";
- case ETH_SPEED_NUM_10G: return "10 Gbps";
- case ETH_SPEED_NUM_20G: return "20 Gbps";
- case ETH_SPEED_NUM_25G: return "25 Gbps";
- case ETH_SPEED_NUM_40G: return "40 Gbps";
- case ETH_SPEED_NUM_50G: return "50 Gbps";
- case ETH_SPEED_NUM_56G: return "56 Gbps";
- case ETH_SPEED_NUM_100G: return "100 Gbps";
- case ETH_SPEED_NUM_200G: return "200 Gbps";
- case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+ case RTE_ETH_SPEED_NUM_NONE: return "None";
+ case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
+ case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+ case RTE_ETH_SPEED_NUM_1G: return "1 Gbps";
+ case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+ case RTE_ETH_SPEED_NUM_5G: return "5 Gbps";
+ case RTE_ETH_SPEED_NUM_10G: return "10 Gbps";
+ case RTE_ETH_SPEED_NUM_20G: return "20 Gbps";
+ case RTE_ETH_SPEED_NUM_25G: return "25 Gbps";
+ case RTE_ETH_SPEED_NUM_40G: return "40 Gbps";
+ case RTE_ETH_SPEED_NUM_50G: return "50 Gbps";
+ case RTE_ETH_SPEED_NUM_56G: return "56 Gbps";
+ case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+ case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+ case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
default: return "Invalid";
}
}
@@ -2831,14 +2828,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
return -EINVAL;
}
- if (eth_link->link_status == ETH_LINK_DOWN)
+ if (eth_link->link_status == RTE_ETH_LINK_DOWN)
return snprintf(str, len, "Link down");
else
return snprintf(str, len, "Link up at %s %s %s",
rte_eth_link_speed_to_str(eth_link->link_speed),
- (eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"FDX" : "HDX",
- (eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+ (eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
"Autoneg" : "Fixed");
}
@@ -3745,7 +3742,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
dev = &rte_eth_devices[port_id];
if (!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n",
port_id);
return -ENOSYS;
@@ -3832,44 +3829,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
dev_offloads = orig_offloads;
/* check which option changed by application */
- cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
- mask |= ETH_VLAN_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ mask |= RTE_ETH_VLAN_STRIP_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
- mask |= ETH_VLAN_FILTER_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+ mask |= RTE_ETH_VLAN_FILTER_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+ cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
- mask |= ETH_VLAN_EXTEND_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ mask |= RTE_ETH_VLAN_EXTEND_MASK;
}
- cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+ cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
- mask |= ETH_QINQ_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+ mask |= RTE_ETH_QINQ_STRIP_MASK;
}
/*no change*/
@@ -3914,17 +3911,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
dev = &rte_eth_devices[port_id];
dev_offloads = &dev->data->dev_conf.rxmode.offloads;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- ret |= ETH_VLAN_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- ret |= ETH_VLAN_FILTER_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
- ret |= ETH_VLAN_EXTEND_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
- ret |= ETH_QINQ_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+ ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
return ret;
}
@@ -4001,7 +3998,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -EINVAL;
}
- if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+ if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
return -EINVAL;
}
@@ -4019,7 +4016,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
{
uint16_t i, num;
- num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+ num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
if (reta_conf[i].mask)
return 0;
@@ -4041,8 +4038,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
(reta_conf[idx].reta[shift] >= max_rxq)) {
RTE_ETHDEV_LOG(ERR,
@@ -4198,7 +4195,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4224,7 +4221,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4365,8 +4362,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
port_id);
return -EINVAL;
}
- if (pool >= ETH_64_POOLS) {
- RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", ETH_64_POOLS - 1);
+ if (pool >= RTE_ETH_64_POOLS) {
+ RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
@@ -6275,7 +6272,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
rte_tel_data_add_dict_string(d, status_str, "UP");
rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
rte_tel_data_add_dict_string(d, "duplex",
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex");
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 21f570832921..1de810d5cdbf 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
* field is not supported, its value is 0.
* All byte-related statistics do not include Ethernet FCS regardless
* of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
*/
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of successfully received packets. */
@@ -281,43 +281,75 @@ struct rte_eth_stats {
/**@{@name Link speed capabilities
* Device supported speeds bitmap flags
*/
-#define ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
+#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
+#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
+#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
+#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
+#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
+#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
+#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
+#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**@}*/
/**@{@name Link speed
* Ethernet numeric link speeds in Mbps
*/
-#define ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
+#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
+#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
+#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
+#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
+#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
+#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
+#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
+#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
+#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**@}*/
/**
@@ -325,21 +357,27 @@ struct rte_eth_stats {
*/
__extension__
struct rte_eth_link {
- uint32_t link_speed; /**< ETH_SPEED_NUM_ */
- uint16_t link_duplex : 1; /**< ETH_LINK_[HALF/FULL]_DUPLEX */
- uint16_t link_autoneg : 1; /**< ETH_LINK_[AUTONEG/FIXED] */
- uint16_t link_status : 1; /**< ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /**< RTE_ETH_SPEED_NUM_ */
+ uint16_t link_duplex : 1; /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint16_t link_autoneg : 1; /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint16_t link_status : 1; /**< RTE_ETH_LINK_[DOWN/UP] */
} __rte_aligned(8); /**< aligned for atomic64 read/write */
/**@{@name Link negotiation
* Constants used in link management.
*/
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/
@@ -356,9 +394,12 @@ struct rte_eth_thresh {
/**@{@name Multi-queue mode
* @see rte_eth_conf.rxmode.mq_mode.
*/
-#define ETH_MQ_RX_RSS_FLAG 0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG 0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**@}*/
/**
@@ -367,50 +408,49 @@ struct rte_eth_thresh {
*/
enum rte_eth_rx_mq_mode {
/** None of DCB, RSS or VMDq mode */
- ETH_MQ_RX_NONE = 0,
+ RTE_ETH_MQ_RX_NONE = 0,
/** For Rx side, only RSS is on */
- ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
/** For Rx side,only DCB is on. */
- ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
/** Both DCB and RSS enable */
- ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Only VMDq, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
/** RSS mode with VMDq */
- ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
/** Use VMDq+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Enable both VMDq and DCB in VMDq */
- ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
- ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+ RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-/**
- * for Rx mq mode backward compatible
- */
-#define ETH_RSS ETH_MQ_RX_RSS
-#define VMDQ_DCB ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
* packets using multi-TCs.
*/
enum rte_eth_tx_mq_mode {
- ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
- ETH_MQ_TX_DCB, /**< For Tx side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */
- ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
+ RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
+ RTE_ETH_MQ_TX_DCB, /**< For Tx side, only DCB is on. */
+ RTE_ETH_MQ_TX_VMDQ_DCB, /**< For Tx side, both DCB and VT are on. */
+ RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-
-/**
- * for Tx mq mode backward compatible
- */
-#define ETH_DCB_NONE ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the Rx features of an Ethernet port.
@@ -423,7 +463,7 @@ struct rte_eth_rxmode {
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
- * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -438,12 +478,17 @@ struct rte_eth_rxmode {
* Note that single VLAN is treated the same as inner VLAN.
*/
enum rte_vlan_type {
- ETH_VLAN_TYPE_UNKNOWN = 0,
- ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
- ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
- ETH_VLAN_TYPE_MAX,
+ RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+ RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+ RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+ RTE_ETH_VLAN_TYPE_MAX,
};
+#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+
/**
* A structure used to describe a VLAN filter.
* If the bit corresponding to a VID is set, such VID is on.
@@ -514,38 +559,70 @@ struct rte_eth_rss_conf {
* Below macros are defined for RSS offload types, they can be used to
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
-#define ETH_RSS_IPV4 RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6 RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
-#define ETH_RSS_PORT RTE_BIT64(18)
-#define ETH_RSS_VXLAN RTE_BIT64(19)
-#define ETH_RSS_GENEVE RTE_BIT64(20)
-#define ETH_RSS_NVGRE RTE_BIT64(21)
-#define ETH_RSS_GTPU RTE_BIT64(23)
-#define ETH_RSS_ETH RTE_BIT64(24)
-#define ETH_RSS_S_VLAN RTE_BIT64(25)
-#define ETH_RSS_C_VLAN RTE_BIT64(26)
-#define ETH_RSS_ESP RTE_BIT64(27)
-#define ETH_RSS_AH RTE_BIT64(28)
-#define ETH_RSS_L2TPV3 RTE_BIT64(29)
-#define ETH_RSS_PFCP RTE_BIT64(30)
-#define ETH_RSS_PPPOE RTE_BIT64(31)
-#define ETH_RSS_ECPRI RTE_BIT64(32)
-#define ETH_RSS_MPLS RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4 RTE_BIT64(2)
+#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6 RTE_BIT64(8)
+#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT RTE_BIT64(18)
+#define ETH_RSS_PORT RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN RTE_BIT64(19)
+#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE RTE_BIT64(20)
+#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE RTE_BIT64(21)
+#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU RTE_BIT64(23)
+#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH RTE_BIT64(24)
+#define ETH_RSS_ETH RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN RTE_BIT64(25)
+#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN RTE_BIT64(26)
+#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP RTE_BIT64(27)
+#define ETH_RSS_ESP RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH RTE_BIT64(28)
+#define ETH_RSS_AH RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29)
+#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP RTE_BIT64(30)
+#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE RTE_BIT64(31)
+#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI RTE_BIT64(32)
+#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS RTE_BIT64(33)
+#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM RTE_ETH_RSS_IPV4_CHKSUM
/**
* The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -554,41 +631,48 @@ struct rte_eth_rss_conf {
* checksum type for constructing the use of RSS offload bits.
*
* Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
*
* For the case that checksum is not used in an UDP header,
* it takes the reserved value 0 as input for the hash function.
*/
-#define ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM RTE_ETH_RSS_L4_CHKSUM
/*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
* more specific input set selection. These bits are defined starting
* from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
* both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
* the same level are used simultaneously, it is the same case as none of
* them are added.
*/
-#define ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
-#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
-#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
-#define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55)
-#define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54)
-#define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53)
-#define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52)
+#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
+#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
+#define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55)
+#define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54)
+#define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53)
+#define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52)
/*
* Use the following macros to combine with the above layers
@@ -603,22 +687,27 @@ struct rte_eth_rss_conf {
* It basically stands for the innermost encapsulation level RSS
* can be performed on according to PMD and device capabilities.
*/
-#define ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
-#define ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
-#define ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -633,219 +722,312 @@ struct rte_eth_rss_conf {
static inline uint64_t
rte_eth_rss_hf_refine(uint64_t rss_hf)
{
- if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
return rss_hf;
}
-#define ETH_RSS_IPV6_PRE32 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
-#define ETH_RSS_IPV6_PRE40 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
-#define ETH_RSS_IPV6_PRE48 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
-#define ETH_RSS_IPV6_PRE56 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
-#define ETH_RSS_IPV6_PRE64 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
-#define ETH_RSS_IPV6_PRE96 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
-#define ETH_RSS_IPV6_PRE32_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
-#define ETH_RSS_IPV6_PRE40_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
-#define ETH_RSS_IPV6_PRE48_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
-#define ETH_RSS_IPV6_PRE56_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
-#define ETH_RSS_IPV6_PRE64_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
-#define ETH_RSS_IPV6_PRE96_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
-#define ETH_RSS_IPV6_PRE32_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
-#define ETH_RSS_IPV6_PRE40_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
-#define ETH_RSS_IPV6_PRE48_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
-#define ETH_RSS_IPV6_PRE56_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
-#define ETH_RSS_IPV6_PRE64_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
-#define ETH_RSS_IPV6_PRE96_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
- ETH_RSS_S_VLAN | \
- ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
/** Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | \
- ETH_RSS_PORT | \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE | \
- ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE | \
+ RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
* Some RSS RETA sizes may not be supported by some drivers, check the
* documentation or the description of relevant functions for more details.
*/
-#define ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE 64
+#define RTE_ETH_RSS_RETA_SIZE_64 64
+#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE 64
+#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
/**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/**@}*/
/**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/**@}*/
/**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
+#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
+#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/**@}*/
/* Definitions used for receive MAC address */
-#define ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/**@{@name VMDq Rx mode
* @see rte_eth_vmdq_rx_conf.rx_mode
*/
-#define ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/**@}*/
+/** Maximum nb. of vlan per mirror rule */
+#define RTE_ETH_MIRROR_MAX_VLANS 64
+#define ETH_MIRROR_MAX_VLANS RTE_ETH_MIRROR_MAX_VLANS
+
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
+
+/**
+ * A structure used to configure VLAN traffic mirror of an Ethernet port.
+ */
+struct rte_eth_vlan_mirror {
+ uint64_t vlan_mask; /**< mask for valid VLAN ID. */
+ /** VLAN ID list for vlan mirroring. */
+ uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
+};
+
+/**
+ * A structure used to configure traffic mirror of an Ethernet port.
+ */
+struct rte_eth_mirror_conf {
+ uint8_t rule_type; /**< Mirroring rule type */
+ uint8_t dst_pool; /**< Destination pool for this mirror rule. */
+ uint64_t pool_mask; /**< Bitmap of pool for pool mirroring */
+ /** VLAN ID setting for VLAN mirroring. */
+ struct rte_eth_vlan_mirror vlan;
+};
+
/**
* A structure used to configure 64 entries of Redirection Table of the
* Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -856,7 +1038,7 @@ struct rte_eth_rss_reta_entry64 {
/** Mask bits indicate which entries need to be updated/queried. */
uint64_t mask;
/** Group of 64 redirection table entries. */
- uint16_t reta[RTE_RETA_GROUP_SIZE];
+ uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
};
/**
@@ -864,38 +1046,44 @@ struct rte_eth_rss_reta_entry64 {
* in DCB configurations
*/
enum rte_eth_nb_tcs {
- ETH_4_TCS = 4, /**< 4 TCs with DCB. */
- ETH_8_TCS = 8 /**< 8 TCs with DCB. */
+ RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+ RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
* in VMDq configurations.
*/
enum rte_eth_nb_pools {
- ETH_8_POOLS = 8, /**< 8 VMDq pools. */
- ETH_16_POOLS = 16, /**< 16 VMDq pools. */
- ETH_32_POOLS = 32, /**< 32 VMDq pools. */
- ETH_64_POOLS = 64 /**< 64 VMDq pools. */
+ RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */
+ RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
+ RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
+ RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
+#define ETH_8_POOLS RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_dcb_tx_conf {
enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_dcb_tx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_tx_conf {
@@ -921,9 +1109,9 @@ struct rte_eth_vmdq_dcb_conf {
struct {
uint16_t vlan_id; /**< The VLAN ID of the received frame */
uint64_t pools; /**< Bitmask of pools for packet Rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
/** Selects a queue in a pool */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
/**
@@ -933,7 +1121,7 @@ struct rte_eth_vmdq_dcb_conf {
* Using this feature, packets are routed to a pool of queues. By default,
* the pool selection is based on the MAC address, the VLAN ID in the
* VLAN tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
* selection using only the MAC address. MAC address to pool mapping is done
* using the rte_eth_dev_mac_addr_add function, with the pool parameter
* corresponding to the pool ID.
@@ -954,7 +1142,7 @@ struct rte_eth_vmdq_rx_conf {
struct {
uint16_t vlan_id; /**< The VLAN ID of the received frame */
uint64_t pools; /**< Bitmask of pools for packet Rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
};
/**
@@ -963,7 +1151,7 @@ struct rte_eth_vmdq_rx_conf {
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */
/**
- * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -1055,7 +1243,7 @@ struct rte_eth_rxconf {
uint16_t share_group;
uint16_t share_qid; /**< Shared Rx queue ID in group */
/**
- * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_queue_offload_capa or rx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1084,7 +1272,7 @@ struct rte_eth_txconf {
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
- * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_queue_offload_capa or tx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1195,12 +1383,17 @@ struct rte_eth_desc_lim {
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
- RTE_FC_NONE = 0, /**< Disable flow control. */
- RTE_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
- RTE_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
- RTE_FC_FULL /**< Enable flow control on both side. */
+ RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+ RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
+ RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
+ RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
+#define RTE_FC_NONE RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_ETH_FC_FULL
+
/**
* A structure used to configure Ethernet flow control parameter.
* These parameters will be configured into the register of the NIC.
@@ -1231,18 +1424,29 @@ struct rte_eth_pfc_conf {
* @see rte_eth_udp_tunnel
*/
enum rte_eth_tunnel_type {
- RTE_TUNNEL_TYPE_NONE = 0,
- RTE_TUNNEL_TYPE_VXLAN,
- RTE_TUNNEL_TYPE_GENEVE,
- RTE_TUNNEL_TYPE_TEREDO,
- RTE_TUNNEL_TYPE_NVGRE,
- RTE_TUNNEL_TYPE_IP_IN_GRE,
- RTE_L2_TUNNEL_TYPE_E_TAG,
- RTE_TUNNEL_TYPE_VXLAN_GPE,
- RTE_TUNNEL_TYPE_ECPRI,
- RTE_TUNNEL_TYPE_MAX,
+ RTE_ETH_TUNNEL_TYPE_NONE = 0,
+ RTE_ETH_TUNNEL_TYPE_VXLAN,
+ RTE_ETH_TUNNEL_TYPE_GENEVE,
+ RTE_ETH_TUNNEL_TYPE_TEREDO,
+ RTE_ETH_TUNNEL_TYPE_NVGRE,
+ RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+ RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+ RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+ RTE_ETH_TUNNEL_TYPE_ECPRI,
+ RTE_ETH_TUNNEL_TYPE_MAX,
};
+#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1250,11 +1454,16 @@ enum rte_eth_tunnel_type {
* Memory space that can be configured to store Flow Director filters
* in the board memory.
*/
-enum rte_fdir_pballoc_type {
- RTE_FDIR_PBALLOC_64K = 0, /**< 64k. */
- RTE_FDIR_PBALLOC_128K, /**< 128k. */
- RTE_FDIR_PBALLOC_256K, /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+ RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
+ RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
+ RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
};
+#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in Rx descriptors.
@@ -1271,9 +1480,9 @@ enum rte_fdir_status_mode {
*
* If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
*/
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
enum rte_fdir_mode mode; /**< Flow Director mode. */
- enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+ enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
enum rte_fdir_status_mode status; /**< How to report FDIR hash. */
/** Rx queue of packets matching a "drop" filter in perfect mode. */
uint8_t drop_queue;
@@ -1282,6 +1491,8 @@ struct rte_fdir_conf {
struct rte_eth_fdir_flex_conf flex_conf;
};
+#define rte_fdir_conf rte_eth_fdir_conf
+
/**
* UDP tunneling configuration.
*
@@ -1299,7 +1510,7 @@ struct rte_eth_udp_tunnel {
/**
* A structure used to enable/disable specific device interrupts.
*/
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint32_t lsc:1;
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1308,18 +1519,20 @@ struct rte_intr_conf {
uint32_t rmv:1;
};
+#define rte_intr_conf rte_eth_intr_conf
+
/**
* A structure used to configure an Ethernet port.
* Depending upon the Rx multi-queue mode, extra advanced
* configuration settings may be needed.
*/
struct rte_eth_conf {
- uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
- used. ETH_LINK_SPEED_FIXED disables link
+ uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+ used. RTE_ETH_LINK_SPEED_FIXED disables link
autonegotiation, and a unique speed shall be
set. Otherwise, the bitmap defines the set of
speeds to be advertised. If the special value
- ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+ RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
supported are advertised. */
struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */
struct rte_eth_txmode txmode; /**< Port Tx configuration. */
@@ -1346,47 +1559,67 @@ struct rte_eth_conf {
struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
} tx_adv_conf; /**< Port Tx DCB configuration (union). */
/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
- is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+ is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT. */
uint32_t dcb_capability_en;
- struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
- struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+ struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+ struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
};
/**
* Rx offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP 0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
-#define DEV_RX_OFFLOAD_SECURITY 0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC 0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
-#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY 0x00008000
+#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC 0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH 0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
+#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
+
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1396,54 +1629,76 @@ struct rte_eth_conf {
/**
* Tx offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
/**
* Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* Tx queue without SW lock.
*/
-#define DEV_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/** Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**
* Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
-#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define RTE_ETH_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1493,7 +1748,7 @@ struct rte_eth_dev_portconf {
* Default values for switch domain ID when ethdev does not support switch
* domain definitions.
*/
-#define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
+#define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
/**
* Ethernet device associated switch information
@@ -1591,7 +1846,7 @@ struct rte_eth_dev_info {
uint16_t vmdq_pool_base; /**< First ID of VMDq pools. */
struct rte_eth_desc_lim rx_desc_lim; /**< Rx descriptors limits */
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptors limits */
- uint32_t speed_capa; /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
/** Configured number of Rx/Tx queues */
uint16_t nb_rx_queues; /**< Number of Rx queues. */
uint16_t nb_tx_queues; /**< Number of Tx queues. */
@@ -1695,8 +1950,10 @@ struct rte_eth_xstat_name {
char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};
-#define ETH_DCB_NUM_TCS 8
-#define ETH_MAX_VMDQ_POOL 64
+#define RTE_ETH_DCB_NUM_TCS 8
+#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL 64
+#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
@@ -1707,12 +1964,12 @@ struct rte_eth_dcb_tc_queue_mapping {
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
/** Rx queues assigned to tc per Pool */
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
};
/**
@@ -1721,8 +1978,8 @@ struct rte_eth_dcb_tc_queue_mapping {
*/
struct rte_eth_dcb_info {
uint8_t nb_tcs; /**< number of TCs */
- uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
- uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
+ uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+ uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
/** Rx queues assigned to tc */
struct rte_eth_dcb_tc_queue_mapping tc_queue;
};
@@ -1746,7 +2003,7 @@ enum rte_eth_fec_mode {
/* A structure used to get capabilities per link speed */
struct rte_eth_fec_capa {
- uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+ uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
uint32_t capa; /**< FEC capabilities bitmask */
};
@@ -1769,13 +2026,17 @@ struct rte_eth_fec_capa {
/**@{@name L2 tunnel configuration */
/** L2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK RTE_ETH_L2_TUNNEL_ENABLE_MASK
/** L2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK RTE_ETH_L2_TUNNEL_INSERTION_MASK
/** L2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK RTE_ETH_L2_TUNNEL_STRIPPING_MASK
/** L2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK RTE_ETH_L2_TUNNEL_FORWARDING_MASK
/**@}*/
/**
@@ -2086,14 +2347,14 @@ uint16_t rte_eth_dev_count_total(void);
* @param speed
* Numerical speed value in Mbps
* @param duplex
- * ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
* @return
* 0 if the speed cannot be mapped
*/
uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
/**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2103,7 +2364,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
const char *rte_eth_dev_rx_offload_name(uint64_t offload);
/**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2211,7 +2472,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
* In addition it contains the hardware offloads features to activate using
- * the DEV_RX_OFFLOAD_* flags.
+ * the RTE_ETH_RX_OFFLOAD_* flags.
* If an offloading set in rx_conf->offloads
* hasn't been set in the input argument eth_conf->rxmode.offloads
* to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2788,7 +3049,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
*
* @param str
* A pointer to a string to be filled with textual representation of
- * device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
* store default link status text.
* @param len
* Length of available memory at 'str' string.
@@ -3334,10 +3595,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
* The port identifier of the Ethernet device.
* @param offload_mask
* The VLAN Offload bit mask can be mixed use with "OR"
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* @return
* - (0) if successful.
* - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3353,10 +3614,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
* The port identifier of the Ethernet device.
* @return
* - (>0) if successful. Bit mask to indicate
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* - (-ENODEV) if *port_id* invalid.
*/
int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5382,7 +5643,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
* rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
* of those packets whose transmission was effectively completed.
*
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same Tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index db3392bf9759..59d9d9eeb63f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2957,7 +2957,7 @@ struct rte_flow_action_rss {
* through.
*/
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
#include "gso_udp4.h"
#define ILLEGAL_UDP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+ ((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
(ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
#define ILLEGAL_TCP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+ ((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
ol_flags = pkt->ol_flags;
if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_tunnel_udp4_segment(pkt, gso_size,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_TCP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_UDP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_udp4_segment(pkt, gso_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
uint32_t gso_types;
/**< the bit mask of required GSO types. The GSO library
* uses the same macros as that of describing device TX
- * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+ * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
* gso_types.
*
* For example, if applications want to segment TCP/IPv4
- * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+ * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
*/
uint16_t gso_size;
/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index fdaaaf67f2f3..57e871201816 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
* The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
* HW capability, At minimum, the PMD should support
* PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
*/
#define PKT_RX_OUTER_L4_CKSUM_MASK ((1ULL << 21) | (1ULL << 22))
@@ -208,7 +208,7 @@ extern "C" {
* a) Fill outer_l2_len and outer_l3_len in mbuf.
* b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
* c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
*/
#define PKT_TX_OUTER_UDP_CKSUM (1ULL << 41)
@@ -254,7 +254,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
* or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
@@ -267,7 +267,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
* if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
* of the dynamic field to be registered:
* const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
* - The application initializes the PMD, and asks for this feature
- * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
* rxconf. This will make the PMD to register the field by calling
* rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
* stores the returned offset.
--
2.31.1
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v7] ethdev: add namespace
2021-10-22 2:02 1% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
@ 2021-10-22 11:03 1% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-22 11:03 UTC (permalink / raw)
To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
Min Hu (Connor),
Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Beilei Xing,
Haiyue Wang, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Steven Webster, Matt Peters,
Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Bruce Richardson,
Konstantin Ananyev, Ruifeng Wang, Rahul Lakkireddy,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Gaetan Rivet, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu,
Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
Yipeng Wang
Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
v2:
* Updated internal components
* Removed deprecation notice
v3:
* Updated missing macros / structs that David highlighted
* Added release notes update
v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete scripts to update user code, although some
shared by Aman:
https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
Sending new version for possible option to get this patch for -rc1 and
work for scripts later, before release.
v5:
* rebased on latest next-net
v6:
* rebased on latest next-net
v7:
* Remove mirror structures which are rebase residue
* rebased on latest next-net
---
app/proc-info/main.c | 8 +-
app/test-eventdev/test_perf_common.c | 4 +-
app/test-eventdev/test_pipeline_common.c | 10 +-
app/test-flow-perf/config.h | 2 +-
app/test-pipeline/init.c | 8 +-
app/test-pmd/cmdline.c | 286 ++---
app/test-pmd/config.c | 200 ++--
app/test-pmd/csumonly.c | 28 +-
app/test-pmd/flowgen.c | 6 +-
app/test-pmd/macfwd.c | 6 +-
app/test-pmd/macswap_common.h | 6 +-
app/test-pmd/parameters.c | 54 +-
app/test-pmd/testpmd.c | 52 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 6 +-
app/test/test_ethdev_link.c | 68 +-
app/test/test_event_eth_rx_adapter.c | 4 +-
app/test/test_kni.c | 2 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 4 +-
| 28 +-
app/test/test_pmd_perf.c | 12 +-
app/test/virtual_pmd.c | 10 +-
doc/guides/eventdevs/cnxk.rst | 2 +-
doc/guides/eventdevs/octeontx2.rst | 2 +-
doc/guides/nics/af_packet.rst | 2 +-
doc/guides/nics/bnxt.rst | 24 +-
doc/guides/nics/enic.rst | 2 +-
doc/guides/nics/features.rst | 114 +-
doc/guides/nics/fm10k.rst | 6 +-
doc/guides/nics/intel_vf.rst | 10 +-
doc/guides/nics/ixgbe.rst | 12 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/tap.rst | 2 +-
.../generic_segmentation_offload_lib.rst | 8 +-
doc/guides/prog_guide/mbuf_lib.rst | 18 +-
doc/guides/prog_guide/poll_mode_drv.rst | 8 +-
doc/guides/prog_guide/rte_flow.rst | 34 +-
doc/guides/prog_guide/rte_security.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 10 +-
doc/guides/rel_notes/release_21_11.rst | 3 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 4 +-
doc/guides/testpmd_app_ug/run_app.rst | 2 +-
drivers/bus/dpaa/include/process.h | 16 +-
drivers/common/cnxk/roc_npc.h | 2 +-
drivers/net/af_packet/rte_eth_af_packet.c | 20 +-
drivers/net/af_xdp/rte_eth_af_xdp.c | 12 +-
drivers/net/ark/ark_ethdev.c | 16 +-
drivers/net/atlantic/atl_ethdev.c | 88 +-
drivers/net/atlantic/atl_ethdev.h | 18 +-
drivers/net/atlantic/atl_rxtx.c | 6 +-
drivers/net/avp/avp_ethdev.c | 26 +-
drivers/net/axgbe/axgbe_dev.c | 6 +-
drivers/net/axgbe/axgbe_ethdev.c | 104 +-
drivers/net/axgbe/axgbe_ethdev.h | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 2 +-
drivers/net/axgbe/axgbe_rxtx.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 12 +-
drivers/net/bnxt/bnxt.h | 62 +-
drivers/net/bnxt/bnxt_ethdev.c | 172 +--
drivers/net/bnxt/bnxt_flow.c | 6 +-
drivers/net/bnxt/bnxt_hwrm.c | 112 +-
drivers/net/bnxt/bnxt_reps.c | 2 +-
drivers/net/bnxt/bnxt_ring.c | 4 +-
drivers/net/bnxt/bnxt_rxq.c | 28 +-
drivers/net/bnxt/bnxt_rxr.c | 4 +-
drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_common.h | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_txr.c | 4 +-
drivers/net/bnxt/bnxt_vnic.c | 30 +-
drivers/net/bnxt/rte_pmd_bnxt.c | 8 +-
drivers/net/bonding/eth_bond_private.h | 4 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 16 +-
drivers/net/bonding/rte_eth_bond_api.c | 6 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 50 +-
drivers/net/cnxk/cn10k_ethdev.c | 42 +-
drivers/net/cnxk/cn10k_rte_flow.c | 2 +-
drivers/net/cnxk/cn10k_rx.c | 4 +-
drivers/net/cnxk/cn10k_tx.c | 4 +-
drivers/net/cnxk/cn9k_ethdev.c | 60 +-
drivers/net/cnxk/cn9k_rx.c | 4 +-
drivers/net/cnxk/cn9k_tx.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 112 +-
drivers/net/cnxk/cnxk_ethdev.h | 49 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 106 +-
drivers/net/cnxk/cnxk_link.c | 14 +-
drivers/net/cnxk/cnxk_ptp.c | 4 +-
drivers/net/cnxk/cnxk_rte_flow.c | 2 +-
drivers/net/cxgbe/cxgbe.h | 46 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 42 +-
drivers/net/cxgbe/cxgbe_main.c | 12 +-
drivers/net/dpaa/dpaa_ethdev.c | 180 ++--
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_flow.c | 32 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 138 +--
drivers/net/dpaa2/dpaa2_ethdev.h | 22 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 8 +-
drivers/net/e1000/e1000_ethdev.h | 18 +-
drivers/net/e1000/em_ethdev.c | 64 +-
drivers/net/e1000/em_rxtx.c | 38 +-
drivers/net/e1000/igb_ethdev.c | 158 +--
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/e1000/igb_rxtx.c | 116 +--
drivers/net/ena/ena_ethdev.c | 70 +-
drivers/net/ena/ena_ethdev.h | 4 +-
| 74 +-
drivers/net/enetc/enetc_ethdev.c | 30 +-
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 88 +-
drivers/net/enic/enic_main.c | 40 +-
drivers/net/enic/enic_res.c | 50 +-
drivers/net/failsafe/failsafe.c | 8 +-
drivers/net/failsafe/failsafe_intr.c | 4 +-
drivers/net/failsafe/failsafe_ops.c | 78 +-
drivers/net/fm10k/fm10k.h | 4 +-
drivers/net/fm10k/fm10k_ethdev.c | 146 +--
drivers/net/fm10k/fm10k_rxtx_vec.c | 6 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 22 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 136 +--
drivers/net/hinic/hinic_pmd_rx.c | 36 +-
drivers/net/hinic/hinic_pmd_rx.h | 22 +-
drivers/net/hns3/hns3_dcb.c | 14 +-
drivers/net/hns3/hns3_ethdev.c | 352 +++----
drivers/net/hns3/hns3_ethdev.h | 12 +-
drivers/net/hns3/hns3_ethdev_vf.c | 100 +-
drivers/net/hns3/hns3_flow.c | 6 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
| 108 +-
| 28 +-
drivers/net/hns3/hns3_rxtx.c | 30 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/hns3/hns3_rxtx_vec.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 272 ++---
drivers/net/i40e/i40e_ethdev.h | 24 +-
drivers/net/i40e/i40e_flow.c | 32 +-
drivers/net/i40e/i40e_hash.c | 158 +--
drivers/net/i40e/i40e_pf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 8 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 48 +-
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 178 ++--
drivers/net/iavf/iavf_hash.c | 320 +++---
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx.h | 24 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 6 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 86 +-
drivers/net/ice/ice_dcf_vf_representor.c | 56 +-
drivers/net/ice/ice_ethdev.c | 180 ++--
drivers/net/ice/ice_ethdev.h | 26 +-
drivers/net/ice/ice_hash.c | 290 +++---
drivers/net/ice/ice_rxtx.c | 16 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 4 +-
drivers/net/ice/ice_rxtx_vec_common.h | 28 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
drivers/net/igc/igc_ethdev.c | 138 +--
drivers/net/igc/igc_ethdev.h | 54 +-
drivers/net/igc/igc_txrx.c | 48 +-
drivers/net/ionic/ionic_ethdev.c | 138 +--
drivers/net/ionic/ionic_ethdev.h | 12 +-
drivers/net/ionic/ionic_lif.c | 36 +-
drivers/net/ionic/ionic_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 64 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 285 +++--
drivers/net/ixgbe/ixgbe_ethdev.h | 18 +-
drivers/net/ixgbe/ixgbe_fdir.c | 24 +-
drivers/net/ixgbe/ixgbe_flow.c | 2 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 12 +-
drivers/net/ixgbe/ixgbe_pf.c | 34 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 249 +++--
drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 2 +-
drivers/net/ixgbe/ixgbe_tm.c | 16 +-
drivers/net/ixgbe/ixgbe_vf_representor.c | 16 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 14 +-
drivers/net/ixgbe/rte_pmd_ixgbe.h | 4 +-
drivers/net/kni/rte_eth_kni.c | 8 +-
drivers/net/liquidio/lio_ethdev.c | 114 +-
drivers/net/memif/memif_socket.c | 2 +-
drivers/net/memif/rte_eth_memif.c | 16 +-
drivers/net/mlx4/mlx4_ethdev.c | 32 +-
drivers/net/mlx4/mlx4_flow.c | 30 +-
drivers/net/mlx4/mlx4_intr.c | 8 +-
drivers/net/mlx4/mlx4_rxq.c | 18 +-
drivers/net/mlx4/mlx4_txq.c | 24 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_defs.h | 6 +-
drivers/net/mlx5/mlx5_ethdev.c | 6 +-
drivers/net/mlx5/mlx5_flow.c | 54 +-
drivers/net/mlx5/mlx5_flow.h | 12 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
| 10 +-
drivers/net/mlx5/mlx5_rxq.c | 40 +-
drivers/net/mlx5/mlx5_rxtx_vec.h | 8 +-
drivers/net/mlx5/mlx5_tx.c | 30 +-
drivers/net/mlx5/mlx5_txq.c | 58 +-
drivers/net/mlx5/mlx5_vlan.c | 4 +-
drivers/net/mlx5/windows/mlx5_os.c | 4 +-
drivers/net/mvneta/mvneta_ethdev.c | 32 +-
drivers/net/mvneta/mvneta_ethdev.h | 10 +-
drivers/net/mvneta/mvneta_rxtx.c | 2 +-
drivers/net/mvpp2/mrvl_ethdev.c | 112 +-
drivers/net/netvsc/hn_ethdev.c | 70 +-
drivers/net/netvsc/hn_rndis.c | 50 +-
drivers/net/nfb/nfb_ethdev.c | 20 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfp/nfp_common.c | 122 +--
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 50 +-
drivers/net/null/rte_eth_null.c | 28 +-
drivers/net/octeontx/octeontx_ethdev.c | 74 +-
drivers/net/octeontx/octeontx_ethdev.h | 30 +-
drivers/net/octeontx/octeontx_ethdev_ops.c | 26 +-
drivers/net/octeontx2/otx2_ethdev.c | 96 +-
drivers/net/octeontx2/otx2_ethdev.h | 64 +-
drivers/net/octeontx2/otx2_ethdev_devargs.c | 12 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 14 +-
drivers/net/octeontx2/otx2_ethdev_sec.c | 8 +-
drivers/net/octeontx2/otx2_flow.c | 2 +-
drivers/net/octeontx2/otx2_flow_ctrl.c | 36 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_link.c | 40 +-
drivers/net/octeontx2/otx2_mcast.c | 2 +-
drivers/net/octeontx2/otx2_ptp.c | 4 +-
| 70 +-
drivers/net/octeontx2/otx2_rx.c | 4 +-
drivers/net/octeontx2/otx2_tx.c | 2 +-
drivers/net/octeontx2/otx2_vlan.c | 42 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 6 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 +-
drivers/net/pcap/pcap_ethdev.c | 12 +-
drivers/net/pfe/pfe_ethdev.c | 18 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_ethdev.c | 156 +--
drivers/net/qede/qede_filter.c | 42 +-
drivers/net/qede/qede_rxtx.c | 2 +-
drivers/net/qede/qede_rxtx.h | 16 +-
drivers/net/ring/rte_eth_ring.c | 20 +-
drivers/net/sfc/sfc.c | 30 +-
drivers/net/sfc/sfc_ef100_rx.c | 10 +-
drivers/net/sfc/sfc_ef100_tx.c | 20 +-
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 8 +-
drivers/net/sfc/sfc_ef10_tx.c | 32 +-
drivers/net/sfc/sfc_ethdev.c | 50 +-
drivers/net/sfc/sfc_flow.c | 2 +-
drivers/net/sfc/sfc_port.c | 52 +-
drivers/net/sfc/sfc_repr.c | 10 +-
drivers/net/sfc/sfc_rx.c | 50 +-
drivers/net/sfc/sfc_tx.c | 50 +-
drivers/net/softnic/rte_eth_softnic.c | 12 +-
drivers/net/szedata2/rte_eth_szedata2.c | 14 +-
drivers/net/tap/rte_eth_tap.c | 104 +-
| 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 102 +-
drivers/net/thunderx/nicvf_ethdev.h | 40 +-
drivers/net/txgbe/txgbe_ethdev.c | 242 ++---
drivers/net/txgbe/txgbe_ethdev.h | 18 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 24 +-
drivers/net/txgbe/txgbe_fdir.c | 20 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_ipsec.c | 12 +-
drivers/net/txgbe/txgbe_pf.c | 34 +-
drivers/net/txgbe/txgbe_rxtx.c | 308 +++---
drivers/net/txgbe/txgbe_rxtx.h | 4 +-
drivers/net/txgbe/txgbe_tm.c | 16 +-
drivers/net/vhost/rte_eth_vhost.c | 16 +-
drivers/net/virtio/virtio_ethdev.c | 124 +--
drivers/net/vmxnet3/vmxnet3_ethdev.c | 72 +-
drivers/net/vmxnet3/vmxnet3_ethdev.h | 16 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 16 +-
examples/bbdev_app/main.c | 6 +-
examples/bond/main.c | 14 +-
examples/distributor/main.c | 12 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 18 +-
.../pipeline_worker_generic.c | 16 +-
.../eventdev_pipeline/pipeline_worker_tx.c | 12 +-
examples/flow_classify/flow_classify.c | 4 +-
examples/flow_filtering/main.c | 16 +-
examples/ioat/ioatfwd.c | 8 +-
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 20 +-
examples/ip_reassembly/main.c | 18 +-
examples/ipsec-secgw/ipsec-secgw.c | 32 +-
examples/ipsec-secgw/sa.c | 8 +-
examples/ipv4_multicast/main.c | 6 +-
examples/kni/main.c | 8 +-
examples/l2fwd-crypto/main.c | 10 +-
examples/l2fwd-event/l2fwd_common.c | 10 +-
examples/l2fwd-event/main.c | 2 +-
examples/l2fwd-jobstats/main.c | 8 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l2fwd/main.c | 8 +-
examples/l3fwd-acl/main.c | 18 +-
examples/l3fwd-graph/main.c | 14 +-
examples/l3fwd-power/main.c | 16 +-
examples/l3fwd/l3fwd_event.c | 4 +-
examples/l3fwd/main.c | 18 +-
examples/link_status_interrupt/main.c | 10 +-
.../client_server_mp/mp_server/init.c | 4 +-
examples/multi_process/symmetric_mp/main.c | 14 +-
examples/ntb/ntb_fwd.c | 6 +-
examples/packet_ordering/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 16 +-
examples/pipeline/obj.c | 20 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 16 +-
examples/qos_sched/init.c | 6 +-
examples/rxtx_callbacks/main.c | 8 +-
examples/server_node_efd/server/init.c | 8 +-
examples/skeleton/basicfwd.c | 4 +-
examples/vhost/main.c | 26 +-
examples/vm_power_manager/main.c | 6 +-
examples/vmdq/main.c | 20 +-
examples/vmdq_dcb/main.c | 40 +-
lib/ethdev/ethdev_driver.h | 36 +-
lib/ethdev/rte_ethdev.c | 181 ++--
lib/ethdev/rte_ethdev.h | 986 +++++++++++-------
lib/ethdev/rte_flow.h | 2 +-
lib/gso/rte_gso.c | 20 +-
lib/gso/rte_gso.h | 4 +-
lib/mbuf/rte_mbuf_core.h | 8 +-
lib/mbuf/rte_mbuf_dyn.h | 2 +-
339 files changed, 6601 insertions(+), 6385 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index bfe5ce825b70..a4271047e693 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
}
ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
- if (ret == 0 && fc_conf.mode != RTE_FC_NONE) {
+ if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE) {
printf("\t -- flow control mode %s%s high %u low %u pause %u%s%s\n",
- fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
- fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
- fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+ fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+ fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
fc_conf.autoneg ? " auto" : "",
fc_conf.high_water,
fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct test_perf *t = evt_test_priv(test);
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_rxconf rx_conf;
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
local_port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = rte_eth_dev_info_get(i, &dev_info);
if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
}
/* Enable mbuf fast free if PMD has the capability. */
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
#define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
/* Configuration */
#define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -178,7 +178,7 @@ app_ports_check_link(void)
RTE_LOG(INFO, USER1, "Port %u %s\n",
port,
link_status_text);
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
all_ports_up = 0;
}
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
int duplex;
if (!strcmp(duplexstr, "half")) {
- duplex = ETH_LINK_HALF_DUPLEX;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
} else if (!strcmp(duplexstr, "full")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else if (!strcmp(duplexstr, "auto")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
fprintf(stderr, "Unknown duplex parameter\n");
return -1;
}
if (!strcmp(speedstr, "10")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
} else if (!strcmp(speedstr, "100")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
} else {
- if (duplex != ETH_LINK_FULL_DUPLEX) {
+ if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
fprintf(stderr, "Invalid speed/duplex parameters\n");
return -1;
}
if (!strcmp(speedstr, "1000")) {
- *speed = ETH_LINK_SPEED_1G;
+ *speed = RTE_ETH_LINK_SPEED_1G;
} else if (!strcmp(speedstr, "10000")) {
- *speed = ETH_LINK_SPEED_10G;
+ *speed = RTE_ETH_LINK_SPEED_10G;
} else if (!strcmp(speedstr, "25000")) {
- *speed = ETH_LINK_SPEED_25G;
+ *speed = RTE_ETH_LINK_SPEED_25G;
} else if (!strcmp(speedstr, "40000")) {
- *speed = ETH_LINK_SPEED_40G;
+ *speed = RTE_ETH_LINK_SPEED_40G;
} else if (!strcmp(speedstr, "50000")) {
- *speed = ETH_LINK_SPEED_50G;
+ *speed = RTE_ETH_LINK_SPEED_50G;
} else if (!strcmp(speedstr, "100000")) {
- *speed = ETH_LINK_SPEED_100G;
+ *speed = RTE_ETH_LINK_SPEED_100G;
} else if (!strcmp(speedstr, "200000")) {
- *speed = ETH_LINK_SPEED_200G;
+ *speed = RTE_ETH_LINK_SPEED_200G;
} else if (!strcmp(speedstr, "auto")) {
- *speed = ETH_LINK_SPEED_AUTONEG;
+ *speed = RTE_ETH_LINK_SPEED_AUTONEG;
} else {
fprintf(stderr, "Unknown speed parameter\n");
return -1;
}
}
- if (*speed != ETH_LINK_SPEED_AUTONEG)
- *speed |= ETH_LINK_SPEED_FIXED;
+ if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+ *speed |= RTE_ETH_LINK_SPEED_FIXED;
return 0;
}
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
int ret;
if (!strcmp(res->value, "all"))
- rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
- ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
- ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
- ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
- ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+ RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+ RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+ RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "eth"))
- rss_conf.rss_hf = ETH_RSS_ETH;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH;
else if (!strcmp(res->value, "vlan"))
- rss_conf.rss_hf = ETH_RSS_VLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
else if (!strcmp(res->value, "ip"))
- rss_conf.rss_hf = ETH_RSS_IP;
+ rss_conf.rss_hf = RTE_ETH_RSS_IP;
else if (!strcmp(res->value, "udp"))
- rss_conf.rss_hf = ETH_RSS_UDP;
+ rss_conf.rss_hf = RTE_ETH_RSS_UDP;
else if (!strcmp(res->value, "tcp"))
- rss_conf.rss_hf = ETH_RSS_TCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_TCP;
else if (!strcmp(res->value, "sctp"))
- rss_conf.rss_hf = ETH_RSS_SCTP;
+ rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
else if (!strcmp(res->value, "ether"))
- rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
else if (!strcmp(res->value, "port"))
- rss_conf.rss_hf = ETH_RSS_PORT;
+ rss_conf.rss_hf = RTE_ETH_RSS_PORT;
else if (!strcmp(res->value, "vxlan"))
- rss_conf.rss_hf = ETH_RSS_VXLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
else if (!strcmp(res->value, "geneve"))
- rss_conf.rss_hf = ETH_RSS_GENEVE;
+ rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
else if (!strcmp(res->value, "nvgre"))
- rss_conf.rss_hf = ETH_RSS_NVGRE;
+ rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
else if (!strcmp(res->value, "l3-pre32"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
else if (!strcmp(res->value, "l3-pre96"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
else if (!strcmp(res->value, "l3-src-only"))
- rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
else if (!strcmp(res->value, "l3-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
else if (!strcmp(res->value, "l4-src-only"))
- rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
else if (!strcmp(res->value, "l4-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
else if (!strcmp(res->value, "l2-src-only"))
- rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
else if (!strcmp(res->value, "l2-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
else if (!strcmp(res->value, "l2tpv3"))
- rss_conf.rss_hf = ETH_RSS_L2TPV3;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
else if (!strcmp(res->value, "esp"))
- rss_conf.rss_hf = ETH_RSS_ESP;
+ rss_conf.rss_hf = RTE_ETH_RSS_ESP;
else if (!strcmp(res->value, "ah"))
- rss_conf.rss_hf = ETH_RSS_AH;
+ rss_conf.rss_hf = RTE_ETH_RSS_AH;
else if (!strcmp(res->value, "pfcp"))
- rss_conf.rss_hf = ETH_RSS_PFCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
else if (!strcmp(res->value, "pppoe"))
- rss_conf.rss_hf = ETH_RSS_PPPOE;
+ rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
else if (!strcmp(res->value, "gtpu"))
- rss_conf.rss_hf = ETH_RSS_GTPU;
+ rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
else if (!strcmp(res->value, "ecpri"))
- rss_conf.rss_hf = ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "mpls"))
- rss_conf.rss_hf = ETH_RSS_MPLS;
+ rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
else if (!strcmp(res->value, "ipv4-chksum"))
- rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+ rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
else if (!strcmp(res->value, "none"))
rss_conf.rss_hf = 0;
else if (!strcmp(res->value, "level-default")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
} else if (!strcmp(res->value, "level-outer")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
} else if (!strcmp(res->value, "level-inner")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
} else if (!strcmp(res->value, "default"))
use_default = 1;
else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
return -1;
}
- idx = hash_index / RTE_RETA_GROUP_SIZE;
- shift = hash_index % RTE_RETA_GROUP_SIZE;
+ idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+ shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[idx].mask |= (1ULL << shift);
reta_conf[idx].reta[shift] = nb_queue;
}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
} else
printf("The reta size of port %d is %u\n",
res->port_id, dev_info.reta_size);
- if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+ if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
fprintf(stderr,
"Currently do not support more than %u entries of redirection table\n",
- ETH_RSS_RETA_SIZE_512);
+ RTE_ETH_RSS_RETA_SIZE_512);
return;
}
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
char *end;
char *str_fld[8];
uint16_t i;
- uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
int ret;
p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
if (ret != 0)
return;
- max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+ max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
if (res->size == 0 || res->size > max_reta_size) {
fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
- if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+ if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
fprintf(stderr,
"The invalid number of traffic class, only 4 or 8 allowed.\n");
return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
enum rte_vlan_type vlan_type;
if (!strcmp(res->vlan_type, "inner"))
- vlan_type = ETH_VLAN_TYPE_INNER;
+ vlan_type = RTE_ETH_VLAN_TYPE_INNER;
else if (!strcmp(res->vlan_type, "outer"))
- vlan_type = ETH_VLAN_TYPE_OUTER;
+ vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
else {
fprintf(stderr, "Unknown vlan type\n");
return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
printf("Parse tunnel is %s\n",
(ports[port_id].parse_tunnel) ? "on" : "off");
printf("IP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
printf("UDP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
printf("TCP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
printf("SCTP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
printf("Outer-Ip checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
printf("Outer-Udp checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
/* display warnings if configuration is not supported by the NIC */
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
- if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware UDP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware TCP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
== 0) {
fprintf(stderr,
"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
if (!strcmp(res->proto, "ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_IPV4_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
} else {
fprintf(stderr,
"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_UDP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
} else {
fprintf(stderr,
"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "tcp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_TCP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
} else {
fprintf(stderr,
"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "sctp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SCTP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
} else {
fprintf(stderr,
"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
} else {
fprintf(stderr,
"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
} else {
fprintf(stderr,
"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr, "Error: TSO is not supported by port %d\n",
res->port_id);
return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
if (ports[res->port_id].tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_TCP_TSO;
+ ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO for non-tunneled packets is disabled\n");
} else {
ports[res->port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO segment size for non-tunneled packets is %d\n",
ports[res->port_id].tso_segsz);
}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr,
"Warning: TSO enabled but not supported by port %d\n",
res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
return dev_info;
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
fprintf(stderr,
"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
fprintf(stderr,
"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
fprintf(stderr,
"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
fprintf(stderr,
"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
fprintf(stderr,
"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
fprintf(stderr,
"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
dev_info = check_tunnel_tso_nic_support(res->port_id);
if (ports[res->port_id].tunnel_tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ ~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
printf("TSO for tunneled packets is disabled\n");
} else {
- uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
ports[res->port_id].dev_conf.txmode.offloads |=
(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
fprintf(stderr,
"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
if (!(ports[res->port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
fprintf(stderr,
"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
return;
}
- if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
rx_fc_en = true;
- if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
tx_fc_en = true;
printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
return;
}
- if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
rx_fc_en = 1;
- if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
tx_fc_en = 1;
}
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
if (!strcmp(res->what,"rxmode")) {
if (!strcmp(res->mode, "AUPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
else if (!strcmp(res->mode, "ROPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
else if (!strcmp(res->mode, "BAM"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
else if (!strncmp(res->mode, "MPE",3))
- vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
}
RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
int ret;
tunnel_udp.udp_port = res->udp_port;
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
tunnel_udp.udp_port = res->udp_port;
if (!strcmp(res->tunnel_type, "vxlan")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
} else if (!strcmp(res->tunnel_type, "geneve")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
} else if (!strcmp(res->tunnel_type, "ecpri")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
} else {
fprintf(stderr, "Invalid tunnel type\n");
return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
#endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MACSEC_INSERT;
+ RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_disable(port_id);
#endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MACSEC_INSERT;
+ ~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cad78350dcc9..a18871d461c4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
};
const struct rss_type_info rss_type_table[] = {
- { "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
- ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
- ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
- ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+ RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+ RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
{ "none", 0 },
- { "eth", ETH_RSS_ETH },
- { "l2-src-only", ETH_RSS_L2_SRC_ONLY },
- { "l2-dst-only", ETH_RSS_L2_DST_ONLY },
- { "vlan", ETH_RSS_VLAN },
- { "s-vlan", ETH_RSS_S_VLAN },
- { "c-vlan", ETH_RSS_C_VLAN },
- { "ipv4", ETH_RSS_IPV4 },
- { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
- { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
- { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
- { "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
- { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
- { "ipv6", ETH_RSS_IPV6 },
- { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
- { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
- { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
- { "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
- { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
- { "l2-payload", ETH_RSS_L2_PAYLOAD },
- { "ipv6-ex", ETH_RSS_IPV6_EX },
- { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
- { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
- { "port", ETH_RSS_PORT },
- { "vxlan", ETH_RSS_VXLAN },
- { "geneve", ETH_RSS_GENEVE },
- { "nvgre", ETH_RSS_NVGRE },
- { "ip", ETH_RSS_IP },
- { "udp", ETH_RSS_UDP },
- { "tcp", ETH_RSS_TCP },
- { "sctp", ETH_RSS_SCTP },
- { "tunnel", ETH_RSS_TUNNEL },
+ { "eth", RTE_ETH_RSS_ETH },
+ { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+ { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+ { "vlan", RTE_ETH_RSS_VLAN },
+ { "s-vlan", RTE_ETH_RSS_S_VLAN },
+ { "c-vlan", RTE_ETH_RSS_C_VLAN },
+ { "ipv4", RTE_ETH_RSS_IPV4 },
+ { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+ { "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", RTE_ETH_RSS_IPV6 },
+ { "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+ { "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+ { "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+ { "port", RTE_ETH_RSS_PORT },
+ { "vxlan", RTE_ETH_RSS_VXLAN },
+ { "geneve", RTE_ETH_RSS_GENEVE },
+ { "nvgre", RTE_ETH_RSS_NVGRE },
+ { "ip", RTE_ETH_RSS_IP },
+ { "udp", RTE_ETH_RSS_UDP },
+ { "tcp", RTE_ETH_RSS_TCP },
+ { "sctp", RTE_ETH_RSS_SCTP },
+ { "tunnel", RTE_ETH_RSS_TUNNEL },
{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
- { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
- { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
- { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
- { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
- { "esp", ETH_RSS_ESP },
- { "ah", ETH_RSS_AH },
- { "l2tpv3", ETH_RSS_L2TPV3 },
- { "pfcp", ETH_RSS_PFCP },
- { "pppoe", ETH_RSS_PPPOE },
- { "gtpu", ETH_RSS_GTPU },
- { "ecpri", ETH_RSS_ECPRI },
- { "mpls", ETH_RSS_MPLS },
- { "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
- { "l4-chksum", ETH_RSS_L4_CHKSUM },
+ { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+ { "esp", RTE_ETH_RSS_ESP },
+ { "ah", RTE_ETH_RSS_AH },
+ { "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+ { "pfcp", RTE_ETH_RSS_PFCP },
+ { "pppoe", RTE_ETH_RSS_PPPOE },
+ { "gtpu", RTE_ETH_RSS_GTPU },
+ { "ecpri", RTE_ETH_RSS_ECPRI },
+ { "mpls", RTE_ETH_RSS_MPLS },
+ { "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+ { "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
{ NULL, 0 },
};
@@ -538,39 +538,39 @@ static void
device_infos_display_speeds(uint32_t speed_capa)
{
printf("\n\tDevice speed capability:");
- if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+ if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
printf(" Autonegotiate (all speeds)");
- if (speed_capa & ETH_LINK_SPEED_FIXED)
+ if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
printf(" Disable autonegotiate (fixed speed) ");
- if (speed_capa & ETH_LINK_SPEED_10M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
printf(" 10 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_10M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M)
printf(" 10 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
printf(" 100 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M)
printf(" 100 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_1G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_1G)
printf(" 1 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_2_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
printf(" 2.5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_5G)
printf(" 5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_10G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10G)
printf(" 10 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_20G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_20G)
printf(" 20 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_25G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_25G)
printf(" 25 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_40G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_40G)
printf(" 40 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_50G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_50G)
printf(" 50 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_56G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_56G)
printf(" 56 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_100G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100G)
printf(" 100 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_200G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_200G)
printf(" 200 Gbps ");
}
@@ -723,9 +723,9 @@ port_infos_display(portid_t port_id)
printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
- printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex"));
- printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+ printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
("On") : ("Off"));
if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -743,22 +743,22 @@ port_infos_display(portid_t port_id)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (vlan_offload >= 0){
printf("VLAN offload: \n");
- if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
printf(" strip on, ");
else
printf(" strip off, ");
- if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
printf("filter on, ");
else
printf("filter off, ");
- if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
printf("extend on, ");
else
printf("extend off, ");
- if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
printf("qinq strip on\n");
else
printf("qinq strip off\n");
@@ -2953,8 +2953,8 @@ port_rss_reta_info(portid_t port_id,
}
for (i = 0; i < nb_entries; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3427,7 +3427,7 @@ dcb_fwd_config_setup(void)
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
fwd_lcores[lc_id]->stream_idx = sm_id;
- for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+ for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
@@ -4490,11 +4490,11 @@ vlan_extend_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
} else {
- vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4520,11 +4520,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
- vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4565,11 +4565,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
- vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4595,11 +4595,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
} else {
- vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4669,7 +4669,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (ports[port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_QINQ_INSERT) {
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
fprintf(stderr, "Error, as QinQ has been enabled.\n");
return;
}
@@ -4678,7 +4678,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
fprintf(stderr,
"Error: vlan insert is not supported by port %d\n",
port_id);
@@ -4686,7 +4686,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
ports[port_id].tx_vlan_id = vlan_id;
}
@@ -4705,7 +4705,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
fprintf(stderr,
"Error: qinq insert not supported by port %d\n",
port_id);
@@ -4713,8 +4713,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = vlan_id;
ports[port_id].tx_vlan_id_outer = vlan_id_outer;
}
@@ -4723,8 +4723,8 @@ void
tx_vlan_reset(portid_t port_id)
{
ports[port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = 0;
ports[port_id].tx_vlan_id_outer = 0;
}
@@ -5130,7 +5130,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
ret = eth_link_get_nowait_print_err(port_id, &link);
if (ret < 0)
return 1;
- if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+ if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
rate > link.link_speed) {
fprintf(stderr,
"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= PKT_TX_UDP_CKSUM;
} else {
udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
if (tso_segsz)
ol_flags |= PKT_TX_TCP_SEG;
- else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
} else {
tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
((char *)l3_hdr + info->l3_len);
/* sctp payload must be a multiple of 4 to be
* offloaded */
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
((ipv4_hdr->total_length & 0x3) == 0)) {
ol_flags |= PKT_TX_SCTP_CKSUM;
} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ipv4_hdr->hdr_checksum = 0;
ol_flags |= PKT_TX_OUTER_IPV4;
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
ol_flags |= PKT_TX_OUTER_IP_CKSUM;
else
ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ol_flags |= PKT_TX_TCP_SEG;
/* Skip SW outer UDP checksum generation if HW supports it */
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
udp_hdr->dgram_cksum
= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
if (info.tunnel_tso_segsz ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
m->outer_l2_len = info.outer_l2_len;
m->outer_l3_len = info.outer_l3_len;
m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
rte_be_to_cpu_16(info.outer_ethertype),
info.outer_l3_len);
/* dump tx packet info */
- if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+ if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
info.tso_segsz != 0)
printf("tx: m->l2_len=%d m->l3_len=%d "
"m->l4_len=%d\n",
m->l2_len, m->l3_len, m->l4_len);
if (info.is_tunnel == 1) {
if ((tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
(tx_ol_flags & PKT_TX_OUTER_IPV6))
printf("tx: m->outer_l2_len=%d "
"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags |= PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
txp = &ports[fs->tx_port];
tx_offloads = txp->dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (i = 0; i < nb_rx; i++) {
if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
{
uint64_t ol_flags = 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
PKT_TX_VLAN : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
PKT_TX_QINQ : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
PKT_TX_MACSEC : 0;
return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index afc75f6bd213..cb40917077ea 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -547,29 +547,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
static int
parse_link_speed(int n)
{
- uint32_t speed = ETH_LINK_SPEED_FIXED;
+ uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
switch (n) {
case 1000:
- speed |= ETH_LINK_SPEED_1G;
+ speed |= RTE_ETH_LINK_SPEED_1G;
break;
case 10000:
- speed |= ETH_LINK_SPEED_10G;
+ speed |= RTE_ETH_LINK_SPEED_10G;
break;
case 25000:
- speed |= ETH_LINK_SPEED_25G;
+ speed |= RTE_ETH_LINK_SPEED_25G;
break;
case 40000:
- speed |= ETH_LINK_SPEED_40G;
+ speed |= RTE_ETH_LINK_SPEED_40G;
break;
case 50000:
- speed |= ETH_LINK_SPEED_50G;
+ speed |= RTE_ETH_LINK_SPEED_50G;
break;
case 100000:
- speed |= ETH_LINK_SPEED_100G;
+ speed |= RTE_ETH_LINK_SPEED_100G;
break;
case 200000:
- speed |= ETH_LINK_SPEED_200G;
+ speed |= RTE_ETH_LINK_SPEED_200G;
break;
case 100:
case 10:
@@ -1002,13 +1002,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
if (!strcmp(optarg, "64K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_64K;
+ RTE_ETH_FDIR_PBALLOC_64K;
else if (!strcmp(optarg, "128K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_128K;
+ RTE_ETH_FDIR_PBALLOC_128K;
else if (!strcmp(optarg, "256K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_256K;
+ RTE_ETH_FDIR_PBALLOC_256K;
else
rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
" must be: 64K or 128K or 256K\n",
@@ -1050,34 +1050,34 @@ launch_args_parse(int argc, char** argv)
}
#endif
if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
- rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
- rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
- rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
if (!strcmp(lgopts[opt_idx].name,
"enable-rx-timestamp"))
- rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-filter"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-extend"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-qinq-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
rx_drop_en = 1;
@@ -1099,13 +1099,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
set_pkt_forwarding_mode(optarg);
if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
- rss_hf = ETH_RSS_IP;
+ rss_hf = RTE_ETH_RSS_IP;
if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
- rss_hf = ETH_RSS_UDP;
+ rss_hf = RTE_ETH_RSS_UDP;
if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
- rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
- rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
if (!strcmp(lgopts[opt_idx].name, "rxq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1495,12 +1495,12 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
char *end = NULL;
n = strtoul(optarg, &end, 16);
- if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+ if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
else
rte_exit(EXIT_FAILURE,
"rx-mq-mode must be >= 0 and <= %d\n",
- ETH_MQ_RX_VMDQ_DCB_RSS);
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
}
if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2b835a27bcd9..a66dfb297c65 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -349,7 +349,7 @@ uint64_t noisy_lkup_num_reads_writes;
/*
* Receive Side Scaling (RSS) configuration.
*/
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
/*
* Port topology configuration
@@ -460,12 +460,12 @@ lcoreid_t latencystats_lcore_id = -1;
struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
};
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
.mode = RTE_FDIR_MODE_NONE,
- .pballoc = RTE_FDIR_PBALLOC_64K,
+ .pballoc = RTE_ETH_FDIR_PBALLOC_64K,
.status = RTE_FDIR_REPORT_STATUS,
.mask = {
.vlan_tci_mask = 0xFFEF,
@@ -524,7 +524,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
/*
* hexadecimal bitmask of RX mq mode can be enabled.
*/
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
/*
* Used to set forced link speed
@@ -1578,9 +1578,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Apply Rx offloads configuration */
for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1717,8 +1717,8 @@ init_config(void)
init_port_config();
- gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
/*
* Records which Mbuf pool to use by each logical core, if needed.
*/
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3769,17 +3769,17 @@ init_port_config(void)
if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
port->dev_conf.rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_RSS);
} else {
- port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
for (i = 0;
i < port->dev_info.nb_rx_queues;
i++)
port->rx_conf[i].offloads &=
- ~DEV_RX_OFFLOAD_RSS_HASH;
+ ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
}
}
@@ -3867,9 +3867,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->enable_default_pool = 0;
vmdq_rx_conf->default_pool = 0;
vmdq_rx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_tx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3877,7 +3877,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->pool_map[i].pools =
1 << (i % vmdq_rx_conf->nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
}
@@ -3885,8 +3885,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
/* set DCB mode of RX and TX of multiple queues */
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
- eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ (rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
} else {
struct rte_eth_dcb_rx_conf *rx_conf =
&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3902,23 +3902,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
rx_conf->nb_tcs = num_tcs;
tx_conf->nb_tcs = num_tcs;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
rx_conf->dcb_tc[i] = i % num_tcs;
tx_conf->dcb_tc[i] = i % num_tcs;
}
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
eth_conf->rx_adv_conf.rss_conf = rss_conf;
- eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
}
if (pfc_en)
eth_conf->dcb_capability_en =
- ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+ RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
else
- eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+ eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
return 0;
}
@@ -3947,7 +3947,7 @@ init_port_dcb_config(portid_t pid,
retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
if (retval < 0)
return retval;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
/* re-configure the device . */
retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3997,7 +3997,7 @@ init_port_dcb_config(portid_t pid,
rxtx_port_config(pid);
/* VLAN filter */
- rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
for (i = 0; i < RTE_DIM(vlan_tags); i++)
rx_vft_set(pid, vlan_tags[i], 1);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 071e4e7d63a3..669ce1e87d79 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -493,7 +493,7 @@ extern lcoreid_t bitrate_lcore_id;
extern uint8_t bitrate_enabled;
#endif
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
extern uint32_t max_rx_pkt_len;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
tx_offloads = txp->dev_conf.txmode.offloads;
vlan_tci = txp->tx_vlan_id;
vlan_tci_outer = txp->tx_vlan_id_outer;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
text, strlen(text), "Invalid default link status string");
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_FIXED;
- link_status.link_speed = ETH_SPEED_NUM_10M,
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #2: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_NONE;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
"string with HDX");
/* test max str len */
- link_status.link_speed = ETH_SPEED_NUM_200G;
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_AUTONEG;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #4:len = %d, %s\n", ret, text);
RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
int ret = 0;
struct rte_eth_link link_status = {
.link_speed = 55555,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
const char *value;
uint32_t link_speed;
} speed_str_map[] = {
- { "None", ETH_SPEED_NUM_NONE },
- { "10 Mbps", ETH_SPEED_NUM_10M },
- { "100 Mbps", ETH_SPEED_NUM_100M },
- { "1 Gbps", ETH_SPEED_NUM_1G },
- { "2.5 Gbps", ETH_SPEED_NUM_2_5G },
- { "5 Gbps", ETH_SPEED_NUM_5G },
- { "10 Gbps", ETH_SPEED_NUM_10G },
- { "20 Gbps", ETH_SPEED_NUM_20G },
- { "25 Gbps", ETH_SPEED_NUM_25G },
- { "40 Gbps", ETH_SPEED_NUM_40G },
- { "50 Gbps", ETH_SPEED_NUM_50G },
- { "56 Gbps", ETH_SPEED_NUM_56G },
- { "100 Gbps", ETH_SPEED_NUM_100G },
- { "200 Gbps", ETH_SPEED_NUM_200G },
- { "Unknown", ETH_SPEED_NUM_UNKNOWN },
+ { "None", RTE_ETH_SPEED_NUM_NONE },
+ { "10 Mbps", RTE_ETH_SPEED_NUM_10M },
+ { "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+ { "1 Gbps", RTE_ETH_SPEED_NUM_1G },
+ { "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+ { "5 Gbps", RTE_ETH_SPEED_NUM_5G },
+ { "10 Gbps", RTE_ETH_SPEED_NUM_10G },
+ { "20 Gbps", RTE_ETH_SPEED_NUM_20G },
+ { "25 Gbps", RTE_ETH_SPEED_NUM_25G },
+ { "40 Gbps", RTE_ETH_SPEED_NUM_40G },
+ { "50 Gbps", RTE_ETH_SPEED_NUM_50G },
+ { "56 Gbps", RTE_ETH_SPEED_NUM_56G },
+ { "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+ { "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+ { "Unknown", RTE_ETH_SPEED_NUM_UNKNOWN },
{ "Invalid", 50505 }
};
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
.intr_conf = {
.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
};
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
static const struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
struct rte_eth_rss_conf rss_conf;
uint8_t rss_key[40];
- struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
uint8_t is_slave;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
- struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
struct slave_conf slave_ports[SLAVE_COUNT];
struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params = {
*/
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IPV6,
+ .rss_hf = RTE_ETH_RSS_IPV6,
},
},
.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
static int
reta_set(uint16_t port_id, uint8_t value, int reta_size)
{
- struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
int i, j;
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
/* select all fields to set */
reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
reta_conf[i].reta[j] = value;
}
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
for (i = 0; i < test_params.bond_dev_info.reta_size;
i++) {
- int index = i / RTE_RETA_GROUP_SIZE;
- int shift = i % RTE_RETA_GROUP_SIZE;
+ int index = i / RTE_ETH_RETA_GROUP_SIZE;
+ int shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (port->reta_conf[index].reta[shift] !=
test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
bond_reta_fetch(void) {
unsigned j;
- for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+ for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
j++)
test_params.bond_reta_conf[j].mask = ~0LL;
@@ -268,7 +268,7 @@ static int
slave_reta_fetch(struct slave_conf *port) {
unsigned j;
- for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
port->reta_conf[j].mask = ~0LL;
TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 1, /* enable loopback */
};
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
/* bulk alloc rx, full-featured tx */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "hybrid")) {
/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
*/
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "full")) {
/* full feature rx,tx pair */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
void *pkt = NULL;
struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
int wait_to_complete __rte_unused)
{
if (!bonded_eth_dev->data->dev_started)
- bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
eth_dev->data->nb_rx_queues = (uint16_t)1;
eth_dev->data->nb_tx_queues = (uint16_t)1;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
- eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
------------------------
The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
application.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
* TX: only the following reduced set of transmit offloads is supported in
vector mode::
- DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* RX: only the following reduced set of receive offloads is supported in
vector mode (note that jumbo MTU is allowed only when the MTU setting
- does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
- DEV_RX_OFFLOAD_VLAN_STRIP
- DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_IPV4_CKSUM
- DEV_RX_OFFLOAD_UDP_CKSUM
- DEV_RX_OFFLOAD_TCP_CKSUM
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
- DEV_RX_OFFLOAD_RSS_HASH
- DEV_RX_OFFLOAD_VLAN_FILTER
+ does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_RSS_HASH
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER
The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
.. code-block:: console
vlan_offload = rte_eth_dev_get_vlan_offload(port);
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
rte_eth_dev_set_vlan_offload(port, vlan_offload);
Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d35751d5b5a7..594e98a6b803 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
Supports getting the speed capabilities that the current device is capable of.
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
* **[related] API**: ``rte_eth_dev_info_get()``.
@@ -101,11 +101,11 @@ Supports Rx interrupts.
Lock-free Tx queue
------------------
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -117,8 +117,8 @@ Fast mbuf free
Supports optimization for fast release of mbufs following successful Tx.
Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
.. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
Supports receiving segmented mbufs.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
Supports Large Receive Offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
@@ -221,12 +221,12 @@ TSO
Supports TCP Segmentation Offloading.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
Supports RSS hashing on RX.
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -302,7 +302,7 @@ Inner RSS
Supports RX RSS hashing on Inner headers.
* **[uses] rte_flow_action_rss**: ``level``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -339,7 +339,7 @@ VMDq
Supports Virtual Machine Device Queues (VMDq).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
Supports Data Center Bridging (DCB).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
Supports filtering of a VLAN Tag identifier.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
operations of security protocol while packet is received in NIC. NIC is not aware
of protocol operations. See Security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
packet is received at NIC. The NIC is capable of understanding the security
protocol operations. See security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[uses] mbuf**: ``mbuf.l2_len``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
Supports CRC stripping by hardware.
A PMD assumed to support CRC stripping by default. PMD should advertise if it supports keeping CRC.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
.. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
Supports VLAN offload to hardware.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -487,14 +487,14 @@ QinQ offload
Supports QinQ (queue in queue) offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
* **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides] rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides] rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
* **[related] API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
@@ -519,16 +519,16 @@ L3 checksum offload
Supports L3 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[uses] mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
Supports L4 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_hw_timestamp:
@@ -557,10 +557,10 @@ Timestamp offload
Supports Timestamp.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[related] eth_dev_ops**: ``read_clock``.
.. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
Supports MACsec.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
Supports inner packet L3 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
Supports inner packet L4 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
.. _nic_features_shared_rx_queue:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
will be checked:
-* ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+* ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
-* ``DEV_RX_OFFLOAD_CHECKSUM``
+* ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
-* ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
* ``fdir_conf->mode``
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
* If the max number of VFs (max_vfs) is set in the range of 1 to 32:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
* If the max number of VFs (max_vfs) is in the range of 33 to 64:
If the number of Rx queues in specified as 4 (``--rxq=4`` in testpmd), then error message is expected
as ``rxq`` is not correct at this case;
- If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+ If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
and each VF have 2 Rx queues;
- On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
- or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+ On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+ or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
It also needs config VF RSS information like hash function, RSS key, RSS key length.
.. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
-* DEV_RX_OFFLOAD_VLAN_STRIP
+* RTE_ETH_RX_OFFLOAD_VLAN_STRIP
-* DEV_RX_OFFLOAD_VLAN_EXTEND
+* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
-* DEV_RX_OFFLOAD_CHECKSUM
+* RTE_ETH_RX_OFFLOAD_CHECKSUM
-* DEV_RX_OFFLOAD_HEADER_SPLIT
+* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
* dev_conf
@@ -163,13 +163,13 @@ l3fwd
~~~~~
When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.
load_balancer
~~~~~~~~~~~~~
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index dd059b227d8e..86927a0b56b0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
- CRC:
- - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+ - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
@@ -611,7 +611,7 @@ Driver options
small-packet traffic.
When MPRQ is enabled, MTU can be larger than the size of
- user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+ user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
be added in next releases
TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
**Known limitation:** TAP supports all of the above hash functions together
and not in partial combinations.
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
- the bit mask of required GSO types. The GSO library uses the same macros as
those that describe a physical device's TX offloading capabilities (i.e.
- ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+ ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
wants to segment TCP/IPv4 packets, it should set gso_types to
- ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
- supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
- ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+ ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+ supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+ ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
allowed.
- a flag, that indicates whether the IPv4 headers of output segments should
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
set out_ip checksum to 0 in the packet
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
- calculate checksum of out_ip and out_udp::
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
- and DEV_TX_OFFLOAD_UDP_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+ and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
- calculate checksum of in_ip::
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
This is similar to case 1), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
This is similar to case 2), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
- DEV_TX_OFFLOAD_TCP_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
- DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API
documentation (rte_mbuf.h). Also refer to the testpmd source code
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
enables more scaling as all workers can send the packets.
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
Device Identification, Ownership and Configuration
--------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any requested offloading by an application must be within the device capabilities.
Any offloading is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a2169517c3f9..d798adb83e1d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1993,23 +1993,23 @@ only matching traffic goes through.
.. table:: RSS
- +---------------+---------------------------------------------+
- | Field | Value |
- +===============+=============================================+
- | ``func`` | RSS hash function to apply |
- +---------------+---------------------------------------------+
- | ``level`` | encapsulation level for ``types`` |
- +---------------+---------------------------------------------+
- | ``types`` | specific RSS hash types (see ``ETH_RSS_*``) |
- +---------------+---------------------------------------------+
- | ``key_len`` | hash key length in bytes |
- +---------------+---------------------------------------------+
- | ``queue_num`` | number of entries in ``queue`` |
- +---------------+---------------------------------------------+
- | ``key`` | hash key |
- +---------------+---------------------------------------------+
- | ``queue`` | queue indices to use |
- +---------------+---------------------------------------------+
+ +---------------+-------------------------------------------------+
+ | Field | Value |
+ +===============+=================================================+
+ | ``func`` | RSS hash function to apply |
+ +---------------+-------------------------------------------------+
+ | ``level`` | encapsulation level for ``types`` |
+ +---------------+-------------------------------------------------+
+ | ``types`` | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+ +---------------+-------------------------------------------------+
+ | ``key_len`` | hash key length in bytes |
+ +---------------+-------------------------------------------------+
+ | ``queue_num`` | number of entries in ``queue`` |
+ +---------------+-------------------------------------------------+
+ | ``key`` | hash key |
+ +---------------+-------------------------------------------------+
+ | ``queue`` | queue indices to use |
+ +---------------+-------------------------------------------------+
Action: ``PF``
^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
For Inline Crypto and Inline protocol offload, device specific defined metadata is
updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
For inline protocol offloaded ingress traffic, the application can register a
pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cc2b89850b07..f11550dc78ac 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,22 +69,16 @@ Deprecation Notices
``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
usage in following public struct hierarchy:
- ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+ ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
Need to identify this kind of usages and fix in 20.11, otherwise this blocks
us extending existing enum/define.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
- Macros will be added for backward compatibility.
- Backward compatibility macros will be removed on v22.11.
- A few old backward compatibility macros from 2013 that does not have
- proper prefix will be removed on v21.11.
-
* ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
will be removed in DPDK 20.11.
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
This will allow application to enable or disable PMDs from updating
``rte_mbuf::hash::fdir``.
This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 569d3c00b9ee..b327c2bfca1c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -446,6 +446,9 @@ ABI Changes
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+ updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
Known Issues
------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
* ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
- (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the RX HW offload capabilities.
By default all HW RX offloads are enabled.
* ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
- (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
allows user to disable some of the TX HW offload capabilities.
By default all HW TX offloads are enabled.
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index d23e0b6a7a2e..30edef07ea20 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -546,7 +546,7 @@ The command line options are:
Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
The default value is 0x7::
- ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+ RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
* ``--record-core-cycles``
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
struct usdpaa_ioctl_link_status_args_old {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
- /* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+ /* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
int link_autoneg;
};
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
struct usdpaa_ioctl_update_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_update_link_speed {
/* network device node name*/
char if_name[IF_NAME_MAX_LEN];
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
};
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index ef85073b17e1..e13d55713625 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -167,7 +167,7 @@ enum roc_npc_rss_hash_function {
struct roc_npc_action_rss {
enum roc_npc_rss_hash_function func;
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
internals->tx_queue[i].sockfd = -1;
}
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
struct pmd_internals *internals = dev->data->dev_private;
- internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
/* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
static int
eth_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
/* ARK PMD supports all line rates, how do we indicate that here ?? */
- dev_info->speed_capa = (ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G);
-
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G);
+
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return 0;
}
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
.remove = eth_atl_pci_remove,
};
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
- | DEV_RX_OFFLOAD_IPV4_CKSUM \
- | DEV_RX_OFFLOAD_UDP_CKSUM \
- | DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_MACSEC_STRIP \
- | DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
- | DEV_TX_OFFLOAD_IPV4_CKSUM \
- | DEV_TX_OFFLOAD_UDP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_TSO \
- | DEV_TX_OFFLOAD_MACSEC_INSERT \
- | DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+ | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+ | RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+ | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_TSO \
+ | RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+ | RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define SFP_EEPROM_SIZE 0x100
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
/* set adapter started */
hw->adapter_stopped = 0;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
uint32_t link_speeds = dev->data->dev_conf.link_speeds;
uint32_t speed_mask = 0;
- if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed_mask = hw->aq_nic_cfg->link_speed_msk;
} else {
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
speed_mask |= AQ_NIC_RATE_10G;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
speed_mask |= AQ_NIC_RATE_5G;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
speed_mask |= AQ_NIC_RATE_1G;
- if (link_speeds & ETH_LINK_SPEED_2_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed_mask |= AQ_NIC_RATE_2G5;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
speed_mask |= AQ_NIC_RATE_100M;
}
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
return 0;
}
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
u32 fc = AQ_NIC_FC_OFF;
int err = 0;
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
memset(&old, 0, sizeof(old));
/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
return 0;
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = hw->aq_link_status.mbps;
rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
hw->aq_fw_ops->get_flow_control(hw, &fc);
if (fc == AQ_NIC_FC_OFF)
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (fc & AQ_NIC_FC_RX)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (fc & AQ_NIC_FC_TX)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
return 0;
}
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
if (hw->aq_fw_ops->set_flow_control == NULL)
return -ENOTSUP;
- if (fc_conf->mode == RTE_FC_NONE)
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
- else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
- else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
- else if (fc_conf->mode == RTE_FC_FULL)
+ else if (fc_conf->mode == RTE_ETH_FC_FULL)
hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+ ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
- cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+ cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
for (i = 0; i < dev->data->nb_rx_queues; i++)
hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
- if (mask & ETH_VLAN_EXTEND_MASK)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK)
ret = -ENOTSUP;
return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
PMD_INIT_FUNC_TRACE();
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
break;
default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
#include "hw_atl/hw_atl_utils.h"
#define ATL_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define ATL_DEV_PRIVATE_TO_HW(adapter) \
(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
/* Setup required number of queues */
_avp_set_queue_counts(eth_dev);
- mask = (ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ mask = (RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
ret = avp_vlan_offload_set(eth_dev, mask);
if (ret < 0) {
PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_eth_link *link = &eth_dev->data->dev_link;
- link->link_speed = ETH_SPEED_NUM_10G;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_status = !!(avp->flags & AVP_F_LINKUP);
return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
}
return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
uint64_t offloads = dev_conf->rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
else
avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
}
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
pdata->rss_hf = rss_conf->rss_hf;
rss_hf = rss_conf->rss_hf;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
/* Checksum offload to hardware */
pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
}
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
{
struct axgbe_port *pdata = dev->data->dev_private;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
pdata->rss_enable = 1;
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
pdata->rss_enable = 0;
else
return -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
- if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
link.link_status = pdata->phy_link;
link.link_speed = pdata->phy_speed;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (pdata->hw_feat.rss) {
dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc.autoneg = pdata->pause_autoneg;
if (pdata->rx_pause && pdata->tx_pause)
- fc.mode = RTE_FC_FULL;
+ fc.mode = RTE_ETH_FC_FULL;
else if (pdata->rx_pause)
- fc.mode = RTE_FC_RX_PAUSE;
+ fc.mode = RTE_ETH_FC_RX_PAUSE;
else if (pdata->tx_pause)
- fc.mode = RTE_FC_TX_PAUSE;
+ fc.mode = RTE_ETH_FC_TX_PAUSE;
else
- fc.mode = RTE_FC_NONE;
+ fc.mode = RTE_ETH_FC_NONE;
fc_conf->high_water = (1024 + (fc.low_water[0] << 9)) / 1024;
fc_conf->low_water = (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
AXGMAC_IOWRITE(pdata, reg, reg_val);
fc.mode = fc_conf->mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
fc.mode = pfc_conf->fc.mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+ case RTE_ETH_VLAN_TYPE_INNER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
if (qinq) {
if (tpid != 0x8100 && tpid != 0x88a8)
PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"Inner type not supported in single tag\n");
}
break;
- case ETH_VLAN_TYPE_OUTER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+ case RTE_ETH_VLAN_TYPE_OUTER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
if (qinq) {
PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"tag supported 0x8100/0x88A8\n");
}
break;
- case ETH_VLAN_TYPE_MAX:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+ case RTE_ETH_VLAN_TYPE_MAX:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
break;
- case ETH_VLAN_TYPE_UNKNOWN:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+ case RTE_ETH_VLAN_TYPE_UNKNOWN:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
break;
}
return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_stripping(pdata);
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_filtering(pdata);
}
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
axgbe_vlan_extend_enable(pdata);
/* Set global registers with default ethertype*/
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
} else {
PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
/* Receive Side Scaling */
#define AXGBE_RSS_OFFLOAD ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define AXGBE_RSS_HASH_KEY_SIZE 40
#define AXGBE_RSS_MAX_TABLE_SIZE 256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
pdata->an_int = 0;
axgbe_an73_clear_interrupts(pdata);
pdata->eth_dev->data->dev_link.link_status =
- ETH_LINK_DOWN;
+ RTE_ETH_LINK_DOWN;
} else if (pdata->an_state == AXGBE_AN_ERROR) {
PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
(DMA_CH_INC * rxq->queue_id));
rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
DMA_CH_RDTR_LO);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
link.link_speed = sc->link_vars.line_speed;
switch (sc->link_vars.duplex) {
case DUPLEX_FULL:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case DUPLEX_HALF:
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
link.link_status = sc->link_vars.link_up;
return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
"VF device is no longer operational");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
}
return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
- dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
bnx2x_load_firmware(sc);
assert(sc->firmware);
- if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
sc->udp_rss = 1;
sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
#define BNXT_FW_STATUS_SHUTDOWN 0x100000
#define BNXT_ETH_RSS_SUPPORT ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
goto err_out;
/* Alloc RSS context only if RSS mode is enabled */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int j, nr_ctxs = bnxt_rss_ctxts(bp);
/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
* setting is not available at this time, it will not be
* configured correctly in the CFA.
*/
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
- (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
true : false);
if (rc)
goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
link_speed = bp->link_info->support_pam4_speeds;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
- speed_capa |= ETH_LINK_SPEED_2_5G;
+ speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
- speed_capa |= ETH_LINK_SPEED_20G;
+ speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
if (bp->link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
dev_info->tx_queue_offload_capa;
if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
*/
/* VMDq resources */
- vpool = 64; /* ETH_64_POOLS */
- vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+ vpool = 64; /* RTE_ETH_64_POOLS */
+ vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
for (i = 0; i < 4; vpool >>= 1, i++) {
if (max_vnics > vpool) {
for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
(uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
goto resource_error;
- if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+ if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
bp->max_vnics < eth_dev->data->nb_rx_queues)
goto resource_error;
bp->rx_cp_nr_rings = bp->rx_nr_rings;
bp->tx_cp_nr_rings = bp->tx_nr_rings;
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
eth_dev->data->port_id,
(uint32_t)link->link_speed,
- (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
uint16_t buf_size;
int i;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return 1;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
return 1;
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
* a limited subset have been enabled.
*/
if (eth_dev->data->dev_conf.rxmode.offloads &
- ~(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_VLAN_FILTER))
+ ~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
goto use_scalar_rx;
#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
* or tx offloads.
*/
if (eth_dev->data->scattered_rx ||
- (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
BNXT_TRUFLOW_EN(bp))
goto use_scalar_tx;
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
bnxt_link_update_op(eth_dev, 1);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- vlan_mask |= ETH_VLAN_FILTER_MASK;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- vlan_mask |= ETH_VLAN_STRIP_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
if (rc)
goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
/* Retrieve link info from hardware */
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
- new.link_speed = ETH_LINK_SPEED_100M;
- new.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new.link_speed = RTE_ETH_LINK_SPEED_100M;
+ new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR,
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
if (!vnic->rss_table)
return -EINVAL;
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return -EINVAL;
if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
for (i = 0; i < reta_size; i++) {
struct bnxt_rx_queue *rxq;
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << sft)))
continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
}
for (idx = 0, i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << sft)) {
uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
* If RSS enablement were different than dev_configure,
* then return -EINVAL
*/
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (!rss_conf->rss_hf)
PMD_DRV_LOG(ERR, "Hash type NONE\n");
} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
vnic->hash_mode =
bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
- ETH_RSS_LEVEL(rss_conf->rss_hf));
+ RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
/*
* If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
hash_types = vnic->hash_type;
rss_conf->rss_hf = 0;
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
fc_conf->autoneg = 1;
switch (bp->link_info->pause) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
}
return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
bp->link_info->auto_pause = 0;
bp->link_info->force_pause = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
}
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
}
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
tunnel_type =
HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (!bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
port = bp->vxlan_fw_dst_port_id;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (!bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
int rc;
vnic = BNXT_GET_DEFAULT_VNIC(bp);
- if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
/* Remove any VLAN filters programmed */
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
bnxt_add_vlan_filter(bp, 0);
}
PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
return 0;
}
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
/* Destroy vnic filters and vnic */
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = bnxt_add_vlan_filter(bp, 0);
if (rc)
return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
return rc;
}
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
if (!dev->data->dev_started)
return 0;
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering */
rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
else
PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
{
struct bnxt *bp = dev->data->dev_private;
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
- if (vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
return -EINVAL;
}
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
switch (tpid) {
case RTE_ETHER_TYPE_QINQ:
bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
}
bp->outer_tpid_bd |= tpid;
PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer vlan in QinQ\n");
return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
bnxt_del_dflt_mac_filter(bp, vnic);
memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
/* This filter will allow only untagged packets */
rc = bnxt_add_vlan_filter(bp, 0);
} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
+
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
}
/* If RSS types is 0, use a best effort configuration */
- types = rss->types ? rss->types : ETH_RSS_IPV4;
+ types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
hash_type = bnxt_rte_to_hwrm_hash_types(types);
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
rxq = bp->rx_queues[act_q->index];
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
vnic->fw_vnic_id != INVALID_HW_RING_ID)
goto use_vnic;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
uint16_t j = dst_id - 1;
//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
- if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
conf->pool_map[j].pools & (1UL << j)) {
PMD_DRV_LOG(DEBUG,
"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
{
uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
- if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+ if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
switch (conf_link_speed) {
- case ETH_LINK_SPEED_10M_HD:
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
{
uint16_t eth_link_speed = 0;
- if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
- return ETH_LINK_SPEED_AUTONEG;
+ if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+ return RTE_ETH_LINK_SPEED_AUTONEG;
- switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_100M:
- case ETH_LINK_SPEED_100M_HD:
+ switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
break;
- case ETH_LINK_SPEED_2_5G:
+ case RTE_ETH_LINK_SPEED_2_5G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
break;
- case ETH_LINK_SPEED_20G:
+ case RTE_ETH_LINK_SPEED_20G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
return eth_link_speed;
}
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
- ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
static int bnxt_validate_link_speed(struct bnxt *bp)
{
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
uint32_t link_speed_capa;
uint32_t one_speed;
- if (link_speed == ETH_LINK_SPEED_AUTONEG)
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
return 0;
link_speed_capa = bnxt_get_speed_capabilities(bp);
- if (link_speed & ETH_LINK_SPEED_FIXED) {
- one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+ if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+ one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
if (one_speed & (one_speed - 1)) {
PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
{
uint16_t ret = 0;
- if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
if (bp->link_info->support_speeds)
return bp->link_info->support_speeds;
link_speed = BNXT_SUPPORTED_SPEEDS;
}
- if (link_speed & ETH_LINK_SPEED_100M)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_100M_HD)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_1G)
+ if (link_speed & RTE_ETH_LINK_SPEED_1G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
- if (link_speed & ETH_LINK_SPEED_2_5G)
+ if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
- if (link_speed & ETH_LINK_SPEED_10G)
+ if (link_speed & RTE_ETH_LINK_SPEED_10G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
- if (link_speed & ETH_LINK_SPEED_20G)
+ if (link_speed & RTE_ETH_LINK_SPEED_20G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
- if (link_speed & ETH_LINK_SPEED_25G)
+ if (link_speed & RTE_ETH_LINK_SPEED_25G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
- if (link_speed & ETH_LINK_SPEED_40G)
+ if (link_speed & RTE_ETH_LINK_SPEED_40G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
- if (link_speed & ETH_LINK_SPEED_50G)
+ if (link_speed & RTE_ETH_LINK_SPEED_50G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
- if (link_speed & ETH_LINK_SPEED_100G)
+ if (link_speed & RTE_ETH_LINK_SPEED_100G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
- if (link_speed & ETH_LINK_SPEED_200G)
+ if (link_speed & RTE_ETH_LINK_SPEED_200G)
ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
return ret;
}
static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
{
- uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+ uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
switch (hw_link_speed) {
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
- eth_link_speed = ETH_SPEED_NUM_100M;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
- eth_link_speed = ETH_SPEED_NUM_1G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
- eth_link_speed = ETH_SPEED_NUM_2_5G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
- eth_link_speed = ETH_SPEED_NUM_10G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
- eth_link_speed = ETH_SPEED_NUM_20G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
- eth_link_speed = ETH_SPEED_NUM_25G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
- eth_link_speed = ETH_SPEED_NUM_40G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
- eth_link_speed = ETH_SPEED_NUM_50G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
- eth_link_speed = ETH_SPEED_NUM_100G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
- eth_link_speed = ETH_SPEED_NUM_200G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_200G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
{
- uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (hw_link_duplex) {
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
/* FALLTHROUGH */
- eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
- eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
default:
PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
link->link_speed =
bnxt_parse_hw_link_speed(link_info->link_speed);
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
link->link_status = link_info->link_up;
link->link_autoneg = link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
- ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
exit:
return rc;
}
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
if (BNXT_CHIP_P5(bp) &&
- dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+ dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
/* 40G is not supported as part of media auto detect.
* The speed should be forced and autoneg disabled
* to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
HWRM_CHECK_RESULT();
- bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+ bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
svif_info = rte_le_to_cpu_16(resp->svif_info);
if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
rx_ring_info->rx_ring_struct->ring_size *
AGG_RING_SIZE_FACTOR)) : 0;
- if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
int tpa_max = BNXT_TPA_MAX_AGGS(bp);
tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
ag_bitmap_start, ag_bitmap_len);
/* TPA info */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rx_ring_info->tpa_info =
((struct bnxt_tpa_info *)
((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->nr_vnics = 0;
/* Multi-queue mode */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* FALLTHROUGH */
/* ETH_8/64_POOLs */
pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
max_pools = RTE_MIN(bp->max_vnics,
RTE_MIN(bp->max_l2_ctx,
RTE_MIN(bp->max_rsscos_ctx,
- ETH_64_POOLS)));
+ RTE_ETH_64_POOLS)));
PMD_DRV_LOG(DEBUG,
"pools = %u max_pools = %u\n",
pools, max_pools);
if (pools > max_pools)
pools = max_pools;
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
break;
default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
ring_idx, rxq, i, vnic);
}
if (i == 0) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
bp->eth_dev->data->promiscuous = 1;
vnic->flags |= BNXT_VNIC_INFO_PROMISC;
}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
vnic->end_grp_id = end_grp_id;
if (i) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
- !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+ !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
vnic->rss_dflt_cr = true;
goto skip_filter_allocation;
}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->rx_num_qs_per_vnic = nb_q_per_grp;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
if (bp->flags & BNXT_FLAG_UPDATE_HASH)
bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
for (i = 0; i < bp->nr_vnics; i++) {
- uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+ uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
vnic = &bp->vnic_info[i];
vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
rxq->queue_id = queue_idx;
rxq->port_id = eth_dev->data->port_id;
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
vnic = rxq->vnic;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq->rx_started = false;
PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (BNXT_HAS_RING_GRPS(bp))
vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
dev_conf = &rxq->bp->eth_dev->data->dev_conf;
offloads = dev_conf->rxmode.offloads;
- outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+ outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
/* Initialize ol_flags table. */
pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
{
uint16_t hwrm_type = 0;
- if (rte_type & ETH_RSS_IPV4)
+ if (rte_type & RTE_ETH_RSS_IPV4)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
- if (rte_type & ETH_RSS_IPV6)
+ if (rte_type & RTE_ETH_RSS_IPV6)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
{
uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
- bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
- bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP));
+ bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+ bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP));
bool l3_only = l3 && !l4;
bool l3_and_l4 = l3 && l4;
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
* return default hash mode.
*/
if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
- return ETH_RSS_LEVEL_PMD_DEFAULT;
+ return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
- rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
- rss_level |= ETH_RSS_LEVEL_INNERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
else
- rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+ rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
return rss_level;
}
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
if (vf >= bp->pdev->max_vfs)
return -EINVAL;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
return -ENOTSUP;
}
/* Is this really the correct mapping? VFd seems to think it is. */
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
flag |= BNXT_VNIC_INFO_PROMISC;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
flag |= BNXT_VNIC_INFO_BCAST;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptor limits */
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[52]; /**< 52-byte hash key buffer. */
uint8_t rss_key_len; /**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
uint16_t key_speed;
switch (speed) {
- case ETH_SPEED_NUM_NONE:
+ case RTE_ETH_SPEED_NUM_NONE:
key_speed = 0x00;
break;
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
key_speed = BOND_LINK_SPEED_KEY_10M;
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
key_speed = BOND_LINK_SPEED_KEY_100M;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
key_speed = BOND_LINK_SPEED_KEY_1000M;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
key_speed = BOND_LINK_SPEED_KEY_10G;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
key_speed = BOND_LINK_SPEED_KEY_20G;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
key_speed = BOND_LINK_SPEED_KEY_40G;
break;
default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (ret >= 0 && link_info.link_status != 0) {
key = link_speed_key(link_info.link_speed) << 1;
- if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+ if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
key |= BOND_LINK_FULL_DUPLEX_KEY;
} else {
key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
return 0;
internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return -1;
}
- if (link_props.link_status == ETH_LINK_UP) {
+ if (link_props.link_status == RTE_ETH_LINK_UP) {
if (internals->active_slave_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
internals->tx_queue_offload_capa = 0;
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = 0;
internals->candidate_max_rx_pktlen = 0;
internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
* In any other mode the link properties are set to default
* values of AUTONEG/DUPLEX
*/
- ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
- ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+ ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
}
}
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
/* If RSS is enabled for bonding, try to enable it for slaves */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* If RSS is enabled for bonding, synchronize RETA */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int i;
struct bond_dev_private *internals;
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 1;
internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
tlb_last_obytets[internals->active_slaves[i]] = 0;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
bond_ctx = ethdev->data->dev_private;
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
bond_ctx->active_slave_count == 0) {
- ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
- ethdev->data->dev_link.link_status = ETH_LINK_UP;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
if (wait_to_complete)
link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
&slave_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
- ETH_SPEED_NUM_NONE;
+ RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
"Slave (port %u) link get failed: %s",
bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
* In theses mode the maximum theoretical link speed is the sum
* of all the slaves
*/
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
goto link_update;
/* check link state properties if bonded link is up*/
- if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
"for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (internals->active_slave_count < 1) {
/* If first active slave, then change link status */
bonded_eth_dev->data->dev_link.link_status =
- ETH_LINK_UP;
+ RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < reta_count; i++) {
internals->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->max_rx_pktlen = 0;
/* Initially allow to choose any offload type */
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
memset(&internals->default_rxconf, 0,
sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* set key to the the value specified in port RSS configuration.
* Fall back to default RSS key if the key is not specified
*/
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
struct rte_eth_rss_conf *rss_conf =
&dev->data->dev_conf.rx_adv_conf.rss_conf;
if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
internals->reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
internals->reta_conf[i].reta[j] =
- (i * RTE_RETA_GROUP_SIZE + j) %
+ (i * RTE_ETH_RETA_GROUP_SIZE + j) %
dev->data->nb_rx_queues;
}
}
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 25da5f6691d0..f7eb0f437b77 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
flags |= NIX_RX_OFFLOAD_PTYPE_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
/* Platform specific checks */
if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
plt_err("Outer IP and SCTP checksum unsupported");
return -EINVAL;
}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* TSO not supported for earlier chip revisions
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
- dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
/* 50G and 100G to be supported for board version C0
* and above of CN9K.
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
}
dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
if (roc_nix_is_vf_or_sdp(&dev->nix) ||
dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
uint32_t speed_capa;
/* Auto negotiation disabled */
- speed_capa = ETH_LINK_SPEED_FIXED;
+ speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
- speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
}
return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
struct roc_nix *nix = &dev->nix;
int i, rc = 0;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup Inline Inbound */
rc = roc_nix_inl_inb_init(nix);
if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
cnxk_nix_inb_mode_set(dev, true);
}
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
struct plt_bitmap *bmap;
size_t bmap_sz;
void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
- /* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+ /* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
goto done;
rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
done:
return 0;
cleanup:
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
rc |= roc_nix_inl_inb_fini(nix);
return rc;
}
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
int rc, ret = 0;
/* Cleanup Inline inbound */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy inbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
}
/* Cleanup Inline outbound */
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
- dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+ dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Destroy outbound sessions */
tvar = NULL;
RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
}
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct rte_eth_fc_conf fc_conf = {0};
int rc;
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
- (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+ (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_cfg.mode =
- (fc_cfg.mode == RTE_FC_FULL ||
- fc_cfg.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_cfg.mode == RTE_ETH_FC_FULL ||
+ fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
/* When Tx Security offload is enabled, increase tx desc count by
* max possible outbound desc count.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
nb_desc += dev->outb.nb_desc;
/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* to avoid meta packet drop as LBK does not currently support
* backpressure.
*/
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
rxq_sp->qconf.nb_desc = nb_desc;
rxq_sp->qconf.mp = mp;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
/* Setup rq reference for inline dev if present */
rc = roc_nix_inl_dev_rq_get(rq);
if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
rc = cnxk_nix_tsc_convert(dev);
if (rc) {
plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
plt_nix_dbg("Releasing rxq %u", qid);
/* Release rq reference for inline dev if present */
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
roc_nix_inl_dev_rq_put(rq);
/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
dev->ethdev_rss_hf = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
uint64_t rss_hf;
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
/* Nothing much to do if offload is not enabled */
if (!(dev->tx_offloads &
- (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+ (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
return 0;
/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
/* Prepare rx cfg */
rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
/* Disable drop re if rx offload security is enabled and
* platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled on PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
cnxk_eth_dev_ops.timesync_enable(eth_dev);
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
CNXK_NIX_TX_NB_SEG_MAX)
#define CNXK_NIX_RSS_L3_L4_SRC_DST \
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
#define CNXK_NIX_RSS_OFFLOAD \
- (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
- ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+ (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL | \
+ RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST | \
+ RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
#define CNXK_NIX_TX_OFFLOAD_CAPA \
- (DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+ (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
#define CNXK_NIX_RX_OFFLOAD_CAPA \
- (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
- DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SECURITY)
+ (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SECURITY)
#define RSS_IPV4_ENABLE \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE \
- (ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+ (RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
val = ROC_NIX_RSS_RETA_SZ_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
val = ROC_NIX_RSS_RETA_SZ_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
val = ROC_NIX_RSS_RETA_SZ_256;
else
val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
- {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
- {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
- {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_SECURITY, " Security,"},
- {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+ {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+ {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
"Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
- {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
- {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
- {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
- {DEV_TX_OFFLOAD_SECURITY, " Security,"},
- {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+ {RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
};
static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
"Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
enum rte_eth_fc_mode mode_map[] = {
- RTE_FC_NONE, RTE_FC_RX_PAUSE,
- RTE_FC_TX_PAUSE, RTE_FC_FULL
+ RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+ RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
};
struct roc_nix *nix = &dev->nix;
int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
plt_err("Scatter offload is not enabled for mtu");
goto exit;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
plt_err("Greater than maximum supported packet length");
goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta[idx] = reta_conf[i].reta[j];
idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
goto fail;
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = reta[idx];
idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_conf->rss_key)
roc_nix_rss_key_set(nix, rss_conf->rss_key);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
plt_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex"
: "half-duplex");
else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
eth_link.link_status = link->status;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return 0;
if (roc_nix_is_lbk(&dev->nix)) {
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_autoneg = ETH_LINK_FIXED;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
rc = roc_nix_mac_link_info_get(&dev->nix, &info);
if (rc)
return rc;
link.link_status = info.status;
link.link_speed = info.speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (info.full_duplex)
link.link_duplex = info.full_duplex;
}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rc = roc_nix_ptp_rx_ena_dis(nix, true);
if (!rc) {
@@ -257,7 +257,7 @@ int
cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+ uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
struct roc_nix *nix = &dev->nix;
int rc = 0;
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index dfc33ba8654a..b08d7c34faa9 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
#define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
#define CXGBE_DEFAULT_RSS_KEY_LEN 40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
/* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
/* Devargs filtermode and filtermask representation */
enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
}
new_link.link_status = cxgbe_force_linkup(adapter) ?
- ETH_LINK_UP : pi->link_cfg.link_ok;
+ RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
goto out;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
else
eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
CXGBE_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!(adapter->flags & FW_QUEUE_BOUND)) {
err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
rx_pause = 1;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
u8 tx_pause = 0, rx_pause = 0;
int ret;
- if (fc_conf->mode == RTE_FC_FULL) {
+ if (fc_conf->mode == RTE_ETH_FC_FULL) {
tx_pause = 1;
rx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
tx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
rx_pause = 1;
}
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
}
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_100G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_50G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_25G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
* that step explicitly.
*/
ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
- !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+ !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
true);
if (ret == 0) {
ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
}
if (ret == 0 && cxgbe_force_linkup(adapter))
- pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return ret;
}
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_UDPEN;
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
{
#define SET_SPEED(__speed_name) \
do { \
- *speed_caps |= ETH_LINK_ ## __speed_name; \
+ *speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
} while (0)
#define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
speed_caps);
if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
- *speed_caps |= ETH_LINK_SPEED_FIXED;
+ *speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
}
/**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
- if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
/* Configure link only if link is UP*/
if (link->link_status) {
- if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
/* Start autoneg only if link is not in autoneg mode */
if (!link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- } else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
- switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M_HD:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ } else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ switch (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_10M:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_100M_HD:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M_HD:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_100M:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_1G:
- speed = ETH_SPEED_NUM_1G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_2_5G:
- speed = ETH_SPEED_NUM_2_5G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_2_5G:
+ speed = RTE_ETH_SPEED_NUM_2_5G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_10G:
- speed = ETH_SPEED_NUM_10G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
- speed = ETH_SPEED_NUM_NONE;
- duplex = ETH_LINK_FULL_DUPLEX;
+ speed = RTE_ETH_SPEED_NUM_NONE;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
}
/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
if (fif->mac_type == fman_mac_1g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G;
} else if (fif->mac_type == fman_mac_2_5g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G
- | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G
+ | RTE_ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
return ret;
- if (link->link_status == ETH_LINK_DOWN &&
+ if (link->link_status == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
if (ioctl_version < 2) {
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (fif->mac_type == fman_mac_1g)
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
else if (fif->mac_type == fman_mac_2_5g)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else if (fif->mac_type == fman_mac_10g)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (fc_conf->mode == RTE_FC_NONE) {
+ if (fc_conf->mode == RTE_ETH_FC_NONE) {
return 0;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
- fc_conf->mode == RTE_FC_FULL) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+ fc_conf->mode == RTE_ETH_FC_FULL) {
fman_if_set_fc_threshold(dev->process_private,
fc_conf->high_water,
fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
}
ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time =
fman_if_get_fc_quanta(dev->process_private);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
fc_conf = dpaa_intf->fc_conf;
ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
#define DPAA_DEBUG_FQ_TX_ERROR 1
#define DPAA_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP)
#define DPAA_TX_CKSUM_OFFLOAD_MASK ( \
PKT_TX_IP_CKSUM | \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
if (req_dist_set % 2 != 0) {
dist_field = 1U << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_ETH;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
if (ipv4_configured)
break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV4;
break;
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (ipv6_configured)
break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV6;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
if (tcp_configured)
break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_TCP;
break;
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (udp_configured)
break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_UDP;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
if (req_dist_set % 2 != 0) {
dist_field = 1ULL << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
- case ETH_RSS_ETH:
-
+ case RTE_ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_ETH:
if (l2_configured)
break;
l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_PPPOE:
+ case RTE_ETH_RSS_PPPOE:
if (pppoe_configured)
break;
kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_ESP:
+ case RTE_ETH_RSS_ESP:
if (esp_configured)
break;
esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_AH:
+ case RTE_ETH_RSS_AH:
if (ah_configured)
break;
ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_C_VLAN:
- case ETH_RSS_S_VLAN:
+ case RTE_ETH_RSS_C_VLAN:
+ case RTE_ETH_RSS_S_VLAN:
if (vlan_configured)
break;
vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_MPLS:
+ case RTE_ETH_RSS_MPLS:
if (mpls_configured)
break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (l3_configured)
break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_TCP_EX:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (l4_configured)
break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* enable timestamp in mbuf */
bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN Filter not avaialble */
if (!priv->max_vlan_filters) {
DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
priv->token, true);
else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_rx_offloads_nodis;
dev_info->tx_offload_capa = dev_tx_offloads_sup |
dev_tx_offloads_nodis;
- dev_info->speed_capa = ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
if (dpaa2_svr_family == SVR_LX2160A) {
- dev_info->speed_capa |= ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+ {RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
};
/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
return -1;
}
- if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
ret = dpaa2_setup_flow_dist(dev,
eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rx_l3_csum_offload = true;
- if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
rx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
#if !defined(RTE_LIBRTE_IEEE1588)
- if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
#endif
{
ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dpaa2_enable_ts[dev->data->port_id] = true;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
tx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
dpaa2_tm_init(dev);
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
return -1;
}
- if (state.up == ETH_LINK_DOWN &&
+ if (state.up == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
link.link_speed = state.rate;
if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* No TX side flow control (send Pause frame disabled)
*/
if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
} else {
/* DPNI_LINK_OPT_PAUSE not set
* if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* Flow control disabled
*/
if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* update cfg with fc_conf */
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
/* Full flow control;
* OPT_PAUSE set, ASYM_PAUSE not set
*/
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
/* Enable RX flow control
* OPT_PAUSE not set;
* ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_PAUSE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
/* Enable TX Flow control
* OPT_PAUSE set
* ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
/* Disable Flow control
* OPT_PAUSE not set
* ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
#define DPAA2_TX_CONF_ENABLE 0x08
#define DPAA2_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP | \
- ETH_RSS_MPLS | \
- ETH_RSS_C_VLAN | \
- ETH_RSS_S_VLAN | \
- ETH_RSS_ESP | \
- ETH_RSS_AH | \
- ETH_RSS_PPPOE)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_MPLS | \
+ RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_ESP | \
+ RTE_ETH_RSS_AH | \
+ RTE_ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
rte_vlan_strip(bufs[num_rx]);
dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
eth_data->port_id);
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP) {
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
rte_vlan_strip(bufs[num_rx]);
}
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags
& PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
int ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 7d5d6377859a..a548ae2ccb2c 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -82,15 +82,15 @@
#define E1000_FTQF_QUEUE_ENABLE 0x00000100
#define IGB_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
/*
* The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
e1000_clear_hw_cntrs_base_generic(hw);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_em_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
em_vlan_hw_strip_enable(dev);
else
em_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
em_vlan_hw_filter_enable(dev);
else
em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
if (link.link_status) {
PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
struct em_rx_entry *sw_ring; /**< address of RX software ring. */
struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
- uint64_t offloads; /**< Offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
uint16_t nb_rx_desc; /**< number of RX descriptors. */
uint16_t rx_tail; /**< current value of RDT register. */
uint16_t nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
uint8_t wthresh; /**< Write-back threshold register. */
struct em_ctx_info ctx_cache;
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
RTE_SET_USED(dev);
tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
return tx_offload_capa;
}
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
uint64_t rx_offload_capa;
rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
return rx_offload_capa;
}
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
- if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
- tx_mq_mode == ETH_MQ_TX_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+ tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* Check multi-queue mode.
- * To no break software we accept ETH_MQ_RX_NONE as this might
+ * To no break software we accept RTE_ETH_MQ_RX_NONE as this might
* be used to turn off VLAN filter.
*/
- if (rx_mq_mode == ETH_MQ_RX_NONE ||
- rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
} else {
/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
/* TX mode is not used here, so mode might be ignored.*/
- if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(WARNING, "SRIOV is active,"
" TX mode %d is not supported. "
" Driver will behave as %d mode.",
- tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
}
/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
return -EINVAL;
}
- if (tx_mq_mode != ETH_MQ_TX_NONE &&
- tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+ tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
" Due to txmode is meaningless in this"
" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
/*
* VLAN Offload Settings
*/
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_igb_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
return ret;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
case e1000_82576:
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return -EINVAL;
}
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = 0x3FFF; /* See RLPML register. */
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
switch (hw->mac.type) {
case e1000_vfadapt:
dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else if (!link_check) {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= E1000_CTRL_EXT_EXT_VLAN;
/* only outer TPID of double VLAN can be configured*/
- if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg = E1000_READ_REG(hw, E1000_VET);
reg = (reg & (~E1000_VET_VET_EXT)) |
((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igb_vlan_hw_strip_enable(dev);
else
igb_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igb_vlan_hw_filter_enable(dev);
else
igb_vlan_hw_filter_disable(dev);
}
- if(mask & ETH_VLAN_EXTEND_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
igb_vlan_hw_extend_enable(dev);
else
igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* on configuration
*/
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
ctrl |= E1000_CTRL_RFCE;
ctrl &= ~E1000_CTRL_TFCE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
ctrl |= E1000_CTRL_TFCE;
ctrl &= ~E1000_CTRL_RFCE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
break;
default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
if (*vfinfo == NULL)
rte_panic("Cannot allocate memory for private VF data\n");
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -186,7 +186,7 @@ struct igb_tx_queue {
/**< Start context position for transmit queue. */
struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
RTE_SET_USED(dev);
- tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return tx_offload_capa;
}
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == e1000_i350 ||
hw->mac.type == e1000_i210 ||
hw->mac.type == e1000_i211)
- rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
return rx_offload_capa;
}
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
E1000_VMOLR_MPME);
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
vmolr |= E1000_VMOLR_AUPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
vmolr |= E1000_VMOLR_ROMPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
vmolr |= E1000_VMOLR_ROPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
vmolr |= E1000_VMOLR_BAM;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
vmolr |= E1000_VMOLR_MPME;
E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VLVF: set up filters for vlan tags as configured */
for (i = 0; i < cfg->nb_pool_maps; i++) {
/* set vlan id in VF register and set the valid bit */
- E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
- (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
- ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+ E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+ (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+ ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
E1000_VLVF_POOLSEL_MASK)));
}
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t mrqc;
- if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+ if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
/*
* SRIOV active scheme
* FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default:
igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Set maximum packet length by default, and might be updated
* together with enabling/disabling dual VLAN.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
max_len += VLAN_TAG_SIZE;
E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
rxcsum |= E1000_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
if (rxmode->offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
rxcsum |= E1000_RXCSUM_TUOFL;
else
rxcsum &= ~E1000_RXCSUM_TUOFL;
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_CRCOFL;
else
rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
- DEV_TX_OFFLOAD_UDP_CKSUM |\
- DEV_TX_OFFLOAD_IPV4_CKSUM |\
- DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
PKT_TX_IP_CKSUM |\
PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
(queue_offloads & QUEUE_OFFLOADS)) {
/* check if TSO is required */
if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
ena_tx_ctx->tso_enable = true;
ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L3 checksum is needed */
if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
ena_tx_ctx->l3_csum_enable = true;
if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L4 checksum is needed */
if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
ena_tx_ctx->l4_csum_enable = true;
} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
PKT_TX_UDP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
ena_tx_ctx->l4_csum_enable = true;
} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
struct rte_eth_link *link = &dev->data->dev_link;
struct ena_adapter *adapter = dev->data->dev_private;
- link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
if (rc)
goto err_start_tx;
- if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
rc = ena_rss_configure(adapter);
if (rc)
goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
adapter->state = ENA_ADAPTER_STATE_CONFIG;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
- dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Scattered Rx cannot be turned off in the HW, so this capability must
* be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.rx_offloads &
(ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
port_offloads |=
- DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
- port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return port_offloads;
}
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
uint64_t port_offloads = 0;
if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
- port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
- port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if (adapter->offloads.tx_offloads &
(ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
port_offloads |=
- DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
- port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return port_offloads;
}
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
dev_info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/* Inform framework about available features */
dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
#endif
- fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+ fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
descs_in_use = rx_ring->ring_size -
ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
#ifdef RTE_LIBRTE_ETHDEV_DEBUG
/* Check if requested offload is also enabled for the queue */
if ((ol_flags & PKT_TX_IP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
(l4_csum_flag == PKT_TX_TCP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
(l4_csum_flag == PKT_TX_UDP_CKSUM &&
- !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+ !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
PMD_TX_LOG(DEBUG,
"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
i, m->nb_segs, tx_ring->id);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
#define ENA_HASH_KEY_SIZE 40
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
/* Each reta_conf is for 64 entries.
* To support 128 we use 2 conf of 64.
*/
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
entry_value =
ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0 ; i < reta_size ; i++) {
- reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
reta_conf[reta_conf_idx].reta[reta_idx] =
ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Convert proto to ETH flag */
switch (proto) {
case ENA_ADMIN_RSS_TCP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
break;
case ENA_ADMIN_RSS_UDP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
break;
case ENA_ADMIN_RSS_TCP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
break;
case ENA_ADMIN_RSS_UDP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
break;
case ENA_ADMIN_RSS_IP4:
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
break;
case ENA_ADMIN_RSS_IP6:
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
break;
case ENA_ADMIN_RSS_IP4_FRAG:
- rss_hf |= ETH_RSS_FRAG_IPV4;
+ rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
break;
case ENA_ADMIN_RSS_NOT_IP:
- rss_hf |= ETH_RSS_L2_PAYLOAD;
+ rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
break;
case ENA_ADMIN_RSS_TCP6_EX:
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
break;
case ENA_ADMIN_RSS_IP6_EX:
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
break;
default:
break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L3. */
switch (fields & ENA_HF_RSS_ALL_L3) {
case ENA_ADMIN_RSS_L3_SA:
- rss_hf |= ETH_RSS_L3_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L3_DA:
- rss_hf |= ETH_RSS_L3_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
break;
default:
break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L4. */
switch (fields & ENA_HF_RSS_ALL_L4) {
case ENA_ADMIN_RSS_L4_SP:
- rss_hf |= ETH_RSS_L4_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L4_DP:
- rss_hf |= ETH_RSS_L4_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
break;
default:
break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
/* Determine which fields of L3 should be used. */
- switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
- case ETH_RSS_L3_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+ case RTE_ETH_RSS_L3_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_DA;
break;
- case ETH_RSS_L3_SRC_ONLY:
+ case RTE_ETH_RSS_L3_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_SA;
break;
default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
}
/* Determine which fields of L4 should be used. */
- switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
- case ETH_RSS_L4_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ case RTE_ETH_RSS_L4_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_DP;
break;
- case ETH_RSS_L4_SRC_ONLY:
+ case RTE_ETH_RSS_L4_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_SP;
break;
default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
int rc, i;
/* Turn on appropriate fields for each requested packet type */
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
- if ((rss_hf & ETH_RSS_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
selected_fields[ENA_ADMIN_RSS_IP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
- if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
- if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+ if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
uint16_t admin_hf;
static bool warn_once;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
return -ENOTSUP;
}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
if (status & ENETC_LINK_MODE)
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
else
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
if (status & ENETC_LINK_STATUS)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
switch (status & ENETC_LINK_SPEED_MASK) {
case ENETC_LINK_SPEED_1G:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ENETC_LINK_SPEED_100M:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
default:
case ENETC_LINK_SPEED_10M:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa =
- (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC);
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC);
return 0;
}
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
RTE_ETH_QUEUE_STATE_STOPPED;
}
- rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0);
return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
int config;
config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
checksum &= ~L3_CKSUM;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
checksum &= ~L4_CKSUM;
enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
*/
uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
uint8_t rss_enable;
- uint64_t rss_hf; /* ETH_RSS flags */
+ uint64_t rss_hf; /* RTE_ETH_RSS flags */
union vnic_rss_key rss_key;
union vnic_rss_cpu rss_cpu;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
uint16_t sub_devid;
uint32_t capa;
} vic_speed_capa_map[] = {
- { 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
- { 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
- { 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
- { 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
- { 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
- { 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
- { 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
- { 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
- { 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
- { 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
- { 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
- { 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
- { 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
- { 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
- { 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1440 Mezz */
- { 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1480 MLOM */
- { 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
- { 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
- { 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
- { 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
- { 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
- { 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+ { 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+ { 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+ { 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+ { 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+ { 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+ { 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+ { 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+ { 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+ { 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+ { 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+ { 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+ { 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+ { 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+ { 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+ { 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+ { 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+ { 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+ { 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+ { 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+ { 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+ { 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+ { 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
{ 0, 0 }, /* End marker */
};
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
ENICPMD_FUNC_TRACE();
offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
enic->ig_vlan_strip_en = 1;
else
enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->mc_count = 0;
enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM);
+ RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* All vlan offload masks to apply the current settings */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = enicpmd_vlan_offload_set(eth_dev, mask);
if (ret) {
dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
}
/* 1300 and later models are at least 40G */
if (id >= 0x0100)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
/* VFs have subsystem id 0, check device id */
if (id == 0) {
/* Newer VF implies at least 40G model */
if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
}
- return ETH_LINK_SPEED_10G;
+ return RTE_ETH_LINK_SPEED_10G;
}
static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
*/
rss_cpu = enic->rss_cpu;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
rss_cpu.cpu[i / 4].b[i % 4] =
enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
*/
conf->offloads = enic->rx_offload_capa;
if (!enic->ig_vlan_strip_en)
- conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx_thresh and other fields are not applicable for enic */
}
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
static int udp_tunnel_common_check(struct enic *enic,
struct rte_eth_udp_tunnel *tnl)
{
- if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
- tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+ if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+ tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
memset(&link, 0, sizeof(link));
link.link_status = enic_get_link_status(enic);
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = vnic_dev_port_speed(enic->vdev);
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
}
eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* vnic notification of link status has already been turned on in
* enic_dev_init() which is called during probe time. Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
* and vlan insertion are supported.
*/
simple_tx_offloads = enic->tx_offload_capa &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if ((eth_dev->data->dev_conf.txmode.offloads &
~simple_tx_offloads) == 0) {
ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type = 0;
rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
if (enic->rq_count > 1 &&
- (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
rss_hf != 0) {
rss_enable = 1;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
if (enic->udp_rss_weak) {
/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
}
}
- if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
if (enic->udp_rss_weak)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
return -EINVAL;
}
enic->tx_offload_capa |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- (enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
- (enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ (enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+ (enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
enic->tx_offload_mask |=
PKT_TX_OUTER_IPV6 |
PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
* IPV4 hash type handles both non-frag and frag packet types.
* TCP/UDP is controlled via a separate flag below.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (ENIC_SETTING(enic, RSSHASH_IPV6))
/*
* The VIC adapter can perform RSS on IPv6 packets with and
* without extension headers. An IPv6 "fragment" is an IPv6
* packet with the fragment extension header.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (enic->udp_rss_weak)
enic->flow_type_rss_offloads |=
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
/* Zero offloads if RSS is not enabled */
if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
enic->tx_queue_offload_capa = 0;
enic->tx_offload_capa =
enic->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->tx_offload_mask =
PKT_TX_IPV6 |
PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
static const struct rte_eth_link eth_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
};
static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
int qid;
struct rte_eth_dev *fsdev;
struct rxq **rxq;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH(sdev)->data->dev_conf.intr_conf;
fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
failsafe_rx_intr_install(struct rte_eth_dev *dev)
{
struct fs_priv *priv = PRIV(dev);
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&priv->data->dev_conf.intr_conf;
if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
* configuring a sub-device.
*/
infos->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->rx_queue_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
infos->flow_type_rss_offloads =
- ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP;
+ RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP;
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
uint8_t drop_en;
uint8_t rx_deferred_start; /* don't start this queue in dev start. */
uint16_t rx_ftag_en; /* indicates FTAG RX supported */
- uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
};
/*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
uint16_t next_rs; /* Next pos to set RS flag */
uint16_t next_dd; /* Next pos to check DD flag */
volatile uint32_t *tail_ptr;
- uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
uint16_t nb_desc;
uint16_t port_id;
uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
- if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
return 0;
if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
};
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
*/
hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
if (mrqc == 0) {
PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
if (hw->mac.type != fm10k_mac_pf)
return;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
nb_queue_pools = vmdq_conf->nb_queue_pools;
/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
- rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t reg;
dev->data->scattered_rx = 1;
reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
}
/* Update default vlan when not in VMDQ mode */
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_50G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
dev->data->dev_link.link_status =
- dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+ dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
return 0;
}
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->max_vfs = pdev->max_vfs;
dev_info->vmdq_pool_base = 0;
dev_info->vmdq_queue_base = 0;
- dev_info->max_vmdq_pools = ETH_32_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_32_POOLS;
dev_info->vmdq_queue_num = FM10K_MAX_QUEUES_PF;
dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
dev_info->reta_size = FM10K_MAX_RSS_INDICES;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
return -EINVAL;
}
- if (vlan_id > ETH_VLAN_ID_MAX) {
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
return -EINVAL;
}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
}
static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_RSS_HASH);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
}
static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO);
+ return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO);
}
static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
return -EINVAL;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
/* If the mapping doesn't fit any supported, return */
if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
hf = 0;
- hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV4) ? RTE_ETH_RSS_IPV4 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX : 0;
rss_conf->rss_hf = hf;
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
/* first clear the internal SW recording structure */
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
false);
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
MAIN_VSI_POOL_NUMBER);
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
true);
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
/* whithout rx ol_flags, no VP flag report */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
#endif
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
static int hinic_link_event_process(struct hinic_hwdev *hwdev,
struct rte_eth_dev *eth_dev, u8 status)
{
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
struct nic_port_info port_info;
struct rte_eth_link link;
int rc = HINIC_OK;
if (!status) {
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
memset(&port_info, 0, sizeof(port_info));
rc = hinic_get_port_info(hwdev, &port_info);
if (rc) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
link.link_speed = port_speed[port_info.speed %
LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
/* init vlan offload */
err = hinic_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
} else {
*speed_capa = 0;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
- *speed_capa |= ETH_LINK_SPEED_1G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
- *speed_capa |= ETH_LINK_SPEED_10G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
- *speed_capa |= ETH_LINK_SPEED_25G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
- *speed_capa |= ETH_LINK_SPEED_40G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
- *speed_capa |= ETH_LINK_SPEED_100G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_100G;
}
}
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
hinic_get_speed_capa(dev, &info->speed_capa);
info->rx_queue_offload_capa = 0;
- info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_RSS_HASH;
+ info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
info->tx_queue_offload_capa = 0;
- info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
u8 port_link_status = 0;
struct nic_port_info port_link_info;
struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
rc = hinic_get_link_status(nic_hwdev, &port_link_status);
if (rc)
return rc;
if (!port_link_status) {
- link->link_status = ETH_LINK_DOWN;
+ link->link_status = RTE_ETH_LINK_DOWN;
link->link_speed = 0;
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
- link->link_autoneg = ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
return HINIC_OK;
}
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* Get link status information from hardware */
rc = hinic_priv_get_dev_link_status(nic_dev, &link);
if (rc != HINIC_OK) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Get link status failed");
goto out;
}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
int err;
/* Enable or disable VLAN filter */
- if (mask & ETH_VLAN_FILTER_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
TRUE : FALSE;
err = hinic_config_vlan_filter(nic_dev->hwdev, on);
if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Enable or disable VLAN stripping */
- if (mask & ETH_VLAN_STRIP_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
TRUE : FALSE;
err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = nic_pause.auto_neg;
if (nic_pause.tx_pause && nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (nic_pause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else if (nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_pause.auto_neg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
nic_pause.tx_pause = true;
else
nic_pause.tx_pause = false;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
nic_pause.rx_pause = true;
else
nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err = 0;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_OK;
}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_ERROR;
}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
rss_conf->rss_hf |= rss_type.ipv4 ?
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
rss_conf->rss_hf |= rss_type.ipv6 ?
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
- rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
return HINIC_OK;
}
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
u16 i = 0;
u16 idx, shift;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
return HINIC_OK;
if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
/* update rss indir_tbl */
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
{
u64 rss_hf = rss_conf->rss_hf;
- rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
}
static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
{
int err, i;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
nic_dev->num_rss = 0;
if (nic_dev->num_rq > 1) {
/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
PMD_DRV_LOG(WARNING, "Alloc rss template failed");
return err;
}
- nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
for (i = 0; i < nic_dev->num_rq; i++)
hinic_add_rq_to_rx_queue_list(nic_dev, i);
}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
{
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (hinic_rss_template_free(nic_dev->hwdev,
nic_dev->rss_tmpl_idx))
PMD_DRV_LOG(WARNING, "Free rss template failed");
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
}
}
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
int ret = 0;
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
ret = hinic_config_mq_rx_rss(nic_dev, on);
break;
default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
int lro_wqe_num;
int buf_size;
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (rss_conf.rss_hf == 0) {
rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
}
/* Enable both L3/L4 rx checksum offload */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
goto rx_csum_ofl_err;
/* config lro */
- lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
true : false;
max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
hinic_rss_deinit(nic_dev);
hinic_destroy_num_qps(nic_dev);
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
#define HINIC_DEFAULT_RX_FREE_THRESH 32
#define HINIC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
return ret;
}
- if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
if (dcb_rx_conf->nb_tcs == 0)
hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
uint16_t nb_tx_q = hw->data->nb_tx_queues;
int ret;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
return 0;
ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
{
switch (mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->requested_fc_mode = HNS3_FC_NONE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->requested_fc_mode = HNS3_FC_FULL;
break;
default:
hw->requested_fc_mode = HNS3_FC_NONE;
hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
- "configured to RTE_FC_NONE", mode);
+ "configured to RTE_ETH_FC_NONE", mode);
break;
}
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 6b89bcef97ba..9881659cebfc 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
};
static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
- { ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
};
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
struct hns3_cmd_desc desc;
int ret;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
return -EINVAL;
}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rte_spinlock_lock(&hw->lock);
rxmode = &dev->data->dev_conf.rxmode;
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
/* ignore vlan filter configuration during promiscuous mode */
if (!dev->data->promiscuous) {
/* Enable or disable VLAN filter */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
true : false;
ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
true : false;
ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
return ret;
}
- ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+ ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
if (ret) {
hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
if (!hw->data->promiscuous) {
/* restore vlan filter states */
offloads = hw->data->dev_conf.rxmode.offloads;
- enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+ enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
ret = hns3_enable_vlan_filter(hns, enable);
if (ret) {
hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
txmode->hw_vlan_reject_untagged);
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
ret = hns3_vlan_offload_set(dev, mask);
if (ret) {
hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
- (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
- hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+ hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
"nb_tcs(%d) != %d or %d in rx direction.",
dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
* configure link_speeds (default 0), which means auto-negotiation.
* In this case, it should return success.
*/
- if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
hw->mac.support_autoneg == 0)
return 0;
- if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
ret = hns3_check_port_speed(hw, link_speeds);
if (ret)
return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
if (ret)
goto cfg_err;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = hns3_setup_dcb(dev);
if (ret)
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rss_conf = conf->rx_adv_conf.rss_conf;
hw->rss_dis_flag = false;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_10M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
- speed_capa |= ETH_LINK_SPEED_10M;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
return speed_capa;
}
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
return speed_capa;
}
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_firber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
if (hns3_dev_get_support(hw, PTP))
- info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
ret = hns3_update_link_info(eth_dev);
if (ret)
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
return ret;
}
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link->link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_NONE;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link->link_duplex = mac->link_duplex;
- new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link->link_autoneg = mac->link_autoneg;
}
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
new_link.link_duplex = mac->link_duplex;
- new_link.link_speed = ETH_SPEED_NUM_NONE;
- new_link.link_status = ETH_LINK_DOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ new_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
break;
}
- if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+ if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
{
switch (speed_cmd) {
case HNS3_CFG_SPEED_10M:
- *speed = ETH_SPEED_NUM_10M;
+ *speed = RTE_ETH_SPEED_NUM_10M;
break;
case HNS3_CFG_SPEED_100M:
- *speed = ETH_SPEED_NUM_100M;
+ *speed = RTE_ETH_SPEED_NUM_100M;
break;
case HNS3_CFG_SPEED_1G:
- *speed = ETH_SPEED_NUM_1G;
+ *speed = RTE_ETH_SPEED_NUM_1G;
break;
case HNS3_CFG_SPEED_10G:
- *speed = ETH_SPEED_NUM_10G;
+ *speed = RTE_ETH_SPEED_NUM_10G;
break;
case HNS3_CFG_SPEED_25G:
- *speed = ETH_SPEED_NUM_25G;
+ *speed = RTE_ETH_SPEED_NUM_25G;
break;
case HNS3_CFG_SPEED_40G:
- *speed = ETH_SPEED_NUM_40G;
+ *speed = RTE_ETH_SPEED_NUM_40G;
break;
case HNS3_CFG_SPEED_50G:
- *speed = ETH_SPEED_NUM_50G;
+ *speed = RTE_ETH_SPEED_NUM_50G;
break;
case HNS3_CFG_SPEED_100G:
- *speed = ETH_SPEED_NUM_100G;
+ *speed = RTE_ETH_SPEED_NUM_100G;
break;
case HNS3_CFG_SPEED_200G:
- *speed = ETH_SPEED_NUM_200G;
+ *speed = RTE_ETH_SPEED_NUM_200G;
break;
default:
return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
switch (speed) {
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
break;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
break;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
break;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
int ret;
pf->support_sfp_query = true;
- mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+ mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
if (ret) {
PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
return ret;
}
- mac->link_status = ETH_LINK_DOWN;
+ mac->link_status = RTE_ETH_LINK_DOWN;
return hns3_config_mtu(hw, pf->mps);
}
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
* all packets coming in in the receiving direction.
*/
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, false);
if (ret) {
hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
}
/* when promiscuous mode was disabled, restore the vlan filter status */
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, true);
if (ret) {
hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
mac_info->supported_speed =
rte_le_to_cpu_32(resp->supported_speed);
mac_info->support_autoneg = resp->autoneg_ability;
- mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
- : ETH_LINK_AUTONEG;
+ mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+ : RTE_ETH_LINK_AUTONEG;
} else {
mac_info->query_type = HNS3_DEFAULT_QUERY;
}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
static uint8_t
hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
{
- if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
- duplex = ETH_LINK_FULL_DUPLEX;
+ if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
return duplex;
}
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
return ret;
/* Do nothing if no SFP */
- if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+ if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
return 0;
/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
/* Config full duplex for SFP */
return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
- ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_FULL_DUPLEX);
}
static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
/*
- * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+ * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
* when receiving frames. Otherwise, CRC will be stripped.
*/
- if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
else
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
hns3_err(hw, "get link status cmd failed %d", ret);
- return ETH_LINK_DOWN;
+ return RTE_ETH_LINK_DOWN;
}
req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
return HNS3_FIBER_LINK_SPEED_1G_BIT;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
return HNS3_FIBER_LINK_SPEED_10G_BIT;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
return HNS3_FIBER_LINK_SPEED_25G_BIT;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
return HNS3_FIBER_LINK_SPEED_40G_BIT;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
return HNS3_FIBER_LINK_SPEED_50G_BIT;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
return HNS3_FIBER_LINK_SPEED_100G_BIT;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
return HNS3_FIBER_LINK_SPEED_200G_BIT;
default:
hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M:
speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
break;
- case ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
break;
- case ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M:
speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
break;
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
break;
default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_1G:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
break;
default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
static inline uint32_t
hns3_get_link_speed(uint32_t link_speeds)
{
- uint32_t speed = ETH_SPEED_NUM_NONE;
-
- if (link_speeds & ETH_LINK_SPEED_10M ||
- link_speeds & ETH_LINK_SPEED_10M_HD)
- speed = ETH_SPEED_NUM_10M;
- if (link_speeds & ETH_LINK_SPEED_100M ||
- link_speeds & ETH_LINK_SPEED_100M_HD)
- speed = ETH_SPEED_NUM_100M;
- if (link_speeds & ETH_LINK_SPEED_1G)
- speed = ETH_SPEED_NUM_1G;
- if (link_speeds & ETH_LINK_SPEED_10G)
- speed = ETH_SPEED_NUM_10G;
- if (link_speeds & ETH_LINK_SPEED_25G)
- speed = ETH_SPEED_NUM_25G;
- if (link_speeds & ETH_LINK_SPEED_40G)
- speed = ETH_SPEED_NUM_40G;
- if (link_speeds & ETH_LINK_SPEED_50G)
- speed = ETH_SPEED_NUM_50G;
- if (link_speeds & ETH_LINK_SPEED_100G)
- speed = ETH_SPEED_NUM_100G;
- if (link_speeds & ETH_LINK_SPEED_200G)
- speed = ETH_SPEED_NUM_200G;
+ uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+ if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+ link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+ speed = RTE_ETH_SPEED_NUM_10M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+ link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+ speed = RTE_ETH_SPEED_NUM_100M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+ speed = RTE_ETH_SPEED_NUM_1G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+ speed = RTE_ETH_SPEED_NUM_10G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+ speed = RTE_ETH_SPEED_NUM_25G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+ speed = RTE_ETH_SPEED_NUM_40G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+ speed = RTE_ETH_SPEED_NUM_50G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+ speed = RTE_ETH_SPEED_NUM_100G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+ speed = RTE_ETH_SPEED_NUM_200G;
return speed;
}
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
static uint8_t
hns3_get_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
struct hns3_set_link_speed_cfg cfg;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
- cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
- if (cfg.autoneg != ETH_LINK_AUTONEG) {
+ cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
ret = hns3_cfg_mac_mode(hw, false);
if (ret)
return ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
current_mode = hns3_get_current_fc_mode(dev);
switch (current_mode) {
case HNS3_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case HNS3_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HNS3_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case HNS3_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
}
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
int i;
rte_spinlock_lock(&hw->lock);
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = pf->local_max_tc;
else
dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
struct rte_eth_dev *eth_dev;
eth_dev = &rte_eth_devices[hw->data->port_id];
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (hw->adapter_state == HNS3_NIC_STARTED) {
rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
* in device of link speed
* below 10 Gbps.
*/
- if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+ if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
*state = 0;
return 0;
}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
* configured FEC mode is returned.
* If link is up, current FEC mode is returned.
*/
- if (hw->mac.link_status == ETH_LINK_DOWN) {
+ if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
ret = get_current_fec_auto_state(hw, &auto_state);
if (ret)
return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
uint32_t cur_capa;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
cur_capa = fec_capa[1].capa;
break;
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
cur_capa = fec_capa[0].capa;
break;
default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index fa08fadc9497..eb3470535363 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t media_type;
uint8_t phy_addr;
- uint8_t link_duplex : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
- uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
- uint8_t link_status : 1; /* ETH_LINK_[DOWN/UP] */
- uint32_t link_speed; /* ETH_SPEED_NUM_ */
+ uint8_t link_duplex : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint8_t link_status : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /* RTE_ETH_SPEED_NUM_ */
/*
* Some firmware versions support only the SFP speed query. In addition
* to the SFP speed query, some firmware supports the query of the speed
@@ -1079,9 +1079,9 @@ static inline uint64_t
hns3_txvlan_cap_get(struct hns3_hw *hw)
{
if (hw->port_base_vlan_cfg.state)
- return DEV_TX_OFFLOAD_VLAN_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
else
- return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
}
#endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8e5df05aa285..c0c1f1c4c107 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
hns3_err(hw, "setting link speed/duplex not supported");
ret = -EINVAL;
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
hw->rss_dis_flag = false;
rss_conf = conf->rx_adv_conf.rss_conf;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN filter */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = hns3vf_en_vlan_filter(hw, true);
else
ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Vlan stripping setting */
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ret = hns3vf_en_hw_strip_rxvtag(hw, true);
else
ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
int ret;
dev_conf = &hw->data->dev_conf;
- en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+ en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
: false;
ret = hns3vf_en_hw_strip_rxvtag(hw, en);
if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
}
/* Apply vlan offload setting */
- ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
struct hns3_hw *hw = &hns->hw;
int ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
/*
* The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&new_link, 0, sizeof(new_link));
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link.link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link.link_duplex = mac->link_duplex;
- new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(eth_dev, &new_link);
}
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
* Make sure call update link status before hns3vf_stop_poll_job
* because update link status depend on polling job exist.
*/
- hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+ hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
hw->mac.link_duplex);
hns3vf_stop_poll_job(eth_dev);
}
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
hns3_set_rxtx_function(eth_dev);
rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
* Kunpeng930 and future kunpeng series support to use src/dst port
* fields to RSS hash for IPv6 SCTP packet type.
*/
- if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
- (rss->types & ETH_RSS_IP ||
+ if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+ (rss->types & RTE_ETH_RSS_IP ||
(!hw->rss_info.ipv6_sctp_offload_supported &&
- rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+ rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return false;
return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
struct hns3_hw *hw = &hns->hw;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
return 0;
ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_tuple_table[] = {
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
};
@@ -146,44 +146,44 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_rss_types[] = {
- { ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+ { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+ { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
};
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
* When user does not specify the following types or a combination of
* the following types, it enables all fields for the supported RSS
* types. the following types as:
- * - ETH_RSS_L3_SRC_ONLY
- * - ETH_RSS_L3_DST_ONLY
- * - ETH_RSS_L4_SRC_ONLY
- * - ETH_RSS_L4_DST_ONLY
+ * - RTE_ETH_RSS_L3_SRC_ONLY
+ * - RTE_ETH_RSS_L3_DST_ONLY
+ * - RTE_ETH_RSS_L4_SRC_ONLY
+ * - RTE_ETH_RSS_L4_DST_ONLY
*/
if (fields_count == 0) {
for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
sizeof(rss_cfg->rss_indirection_tbl));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
rte_spinlock_unlock(&hw->lock);
hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
}
rte_spinlock_lock(&hw->lock);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] =
rss_cfg->rss_indirection_tbl[i];
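The reta_update/reta_query loops above split a flat indirection-table index into a group index and a bit position, because the ethdev RETA API packs entries into groups of `RTE_ETH_RETA_GROUP_SIZE` (64), each with its own validity mask. A small sketch of that split:

```c
#include <stdint.h>

#define RETA_GROUP_SIZE 64 /* mirrors RTE_ETH_RETA_GROUP_SIZE */

/* Split flat RETA index i into reta_conf[] entry and mask bit. */
static void reta_split(uint16_t i, uint16_t *idx, uint16_t *shift)
{
	*idx = i / RETA_GROUP_SIZE;   /* which reta_conf[] group */
	*shift = i % RETA_GROUP_SIZE; /* which bit of that group's mask */
}
```

An entry is only read or written when `reta_conf[idx].mask & (1ULL << shift)` is set, exactly as in the driver loops.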
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/* When RSS is off, redirect the packet queue 0 */
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
hns3_rss_uninit(hns);
/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
* When RSS is off, it doesn't need to configure rss redirection table
* to hardware.
*/
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
hw->rss_ind_tbl_size);
if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
return ret;
rss_indir_table_uninit:
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret1 = hns3_rss_reset_indir_table(hw);
if (ret1 != 0)
return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h

index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
#include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
#define HNS3_RSS_IND_TBL_SIZE 512 /* The size of hash lookup table */
#define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
/* CRC len set here is used for amending packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
rxq->rx_buf_len);
}
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = !dev->data->scattered_rx &&
- (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+ (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
int ret;
offloads = hw->data->dev_conf.rxmode.offloads;
- gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return false;
- return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+ return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
}
static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
return true;
#else
#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
uint16_t rx_rearm_nb; /* number of remaining BDs to be re-armed */
- /* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+ /* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
uint8_t crc_len;
/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
if (hns3_dev_get_support(hw, PTP))
return -ENOTSUP;
- /* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
- if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ /* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
return -ENOTSUP;
return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
int
hns3_rx_check_vec_support(struct rte_eth_dev *dev)
{
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN;
+ uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* Set the global registers with default ether type value */
if (!pf->support_multi_driver) {
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
if (ret != I40E_SUCCESS) {
PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Only legacy filter API needs the following fdir config. So when the
* legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
* number, which will be available after rx_queue_setup(). dev_start()
* function is good to place RSS setup.
*/
- if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
ret = i40e_vmdq_setup(dev);
if (ret)
goto err;
}
- if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = i40e_dcb_setup(dev);
if (ret) {
PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
{
uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed |= I40E_LINK_SPEED_40GB;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed |= I40E_LINK_SPEED_25GB;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed |= I40E_LINK_SPEED_20GB;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed |= I40E_LINK_SPEED_10GB;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed |= I40E_LINK_SPEED_1GB;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
link_speed |= I40E_LINK_SPEED_100MB;
return link_speed;
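The `i40e_parse_link_speeds()` hunk above is a pure bitmask translation: each `RTE_ETH_LINK_SPEED_*` bit requested by the application sets the matching device speed bit. A minimal sketch of the same shape, with stand-in constants (not the real ethdev or i40e values):

```c
#include <stdint.h>

#define LINK_SPEED_10G (1u << 0) /* stand-in for RTE_ETH_LINK_SPEED_10G */
#define LINK_SPEED_40G (1u << 1) /* stand-in for RTE_ETH_LINK_SPEED_40G */
#define DEV_SPEED_10GB 0x04      /* stand-in for a device speed bit */
#define DEV_SPEED_40GB 0x08      /* stand-in for a device speed bit */

/* Translate an ethdev-style speed bitmask into device speed bits. */
static uint8_t parse_link_speeds(uint32_t link_speeds)
{
	uint8_t link_speed = 0;

	if (link_speeds & LINK_SPEED_40G)
		link_speed |= DEV_SPEED_40GB;
	if (link_speeds & LINK_SPEED_10G)
		link_speed |= DEV_SPEED_10GB;
	return link_speed;
}
```

Since both sides of the translation are plain bit tests, the rename is mechanical: only the `RTE_ETH_LINK_SPEED_*` spellings change.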
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
I40E_AQ_PHY_LINK_ENABLED;
- if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
- conf->link_speeds = ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_100M;
+ if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+ conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_100M;
abilities |= I40E_AQ_PHY_AN_ENABLED;
} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
/* Parse the link status */
switch (link_speed) {
case I40E_REG_SPEED_0:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_REG_SPEED_1:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_REG_SPEED_2:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_REG_SPEED_3:
if (hw->mac.type == I40E_MAC_X722) {
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
} else {
reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
if (reg_val & I40E_REG_MACC_25GB)
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
else
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
}
break;
case I40E_REG_SPEED_4:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
default:
PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
status = i40e_aq_get_link_info(hw, enable_lse,
&link_status, NULL);
if (unlikely(status != I40E_SUCCESS)) {
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
return;
}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
/* Parse the link status */
switch (link_status.link_speed) {
case I40E_LINK_SPEED_100MB:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (link->link_status)
- link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
/* i40e uses full duplex only */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (!wait_to_complete && !enable_lse)
update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
/* For XL710 */
- dev_info->speed_capa = ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
dev_info->default_rxportconf.nb_queues = 2;
dev_info->default_txportconf.nb_queues = 2;
if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
/* For XXV710 */
- dev_info->speed_capa = ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
dev_info->default_txportconf.ring_size = 256;
} else {
/* For X710 */
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
dev_info->default_rxportconf.ring_size = 512;
dev_info->default_txportconf.ring_size = 256;
} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
int ret;
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
reg_id = 2;
}
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
int ret = 0;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) ||
- (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+ (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
/* 802.1ad frames ability is added in NVM API 1.7*/
if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->first_tag = rte_cpu_to_le_16(tpid);
- else if (vlan_type == ETH_VLAN_TYPE_INNER)
+ else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
hw->second_tag = rte_cpu_to_le_16(tpid);
} else {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->second_tag = rte_cpu_to_le_16(tpid);
}
ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
i40e_vsi_config_vlan_filter(vsi, TRUE);
else
i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_vlan_stripping(vsi, FALSE);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
i40e_vsi_config_double_vlan(vsi, TRUE);
/* Set global registers with default ethertype. */
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
}
else
i40e_vsi_config_double_vlan(vsi, FALSE);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
/* Enable or disable outer VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
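The `i40e_vlan_offload_set()` hunk follows the standard ethdev pattern: the `mask` argument selects which VLAN features to reprogram, and the rxmode offload bits give the desired on/off state for each. A stand-alone sketch of that two-level check (constants are illustrative, not ethdev's):

```c
#include <stdbool.h>
#include <stdint.h>

#define VLAN_STRIP_MASK        (1u << 0) /* "reconfigure stripping" */
#define VLAN_FILTER_MASK       (1u << 1) /* "reconfigure filtering" */
#define RX_OFFLOAD_VLAN_STRIP  (1u << 0) /* desired stripping state */
#define RX_OFFLOAD_VLAN_FILTER (1u << 1) /* desired filtering state */

struct vlan_state { bool strip; bool filter; };

/* Apply only the features named in mask, using offloads as the state. */
static void vlan_offload_set(struct vlan_state *s, uint32_t mask,
			     uint32_t offloads)
{
	if (mask & VLAN_STRIP_MASK)
		s->strip = !!(offloads & RX_OFFLOAD_VLAN_STRIP);
	if (mask & VLAN_FILTER_MASK)
		s->filter = !!(offloads & RX_OFFLOAD_VLAN_FILTER);
}
```

The rename touches both levels: the `RTE_ETH_VLAN_*_MASK` selectors and the `RTE_ETH_RX_OFFLOAD_*` state bits.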
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* Return current mode according to actual setting*/
switch (hw->fc.current_mode) {
case I40E_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case I40E_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case I40E_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case I40E_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
};
return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
struct i40e_hw *hw;
struct i40e_pf *pf;
enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
- [RTE_FC_NONE] = I40E_FC_NONE,
- [RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
- [RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
- [RTE_FC_FULL] = I40E_FC_FULL
+ [RTE_ETH_FC_NONE] = I40E_FC_NONE,
+ [RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+ [RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+ [RTE_ETH_FC_FULL] = I40E_FC_FULL
};
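The `rte_fcmode_2_i40e_fcmode[]` table above uses designated initializers indexed by the ethdev flow-control enum, so renaming the `RTE_ETH_FC_*` enumerators only changes the index spellings, not the layout. The same idiom in miniature (enum values here are arbitrary stand-ins):

```c
/* Ethdev-style flow-control modes used as array indices. */
enum eth_fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

/* Device-side flow-control modes (illustrative values). */
enum dev_fc_mode {
	DEV_FC_NONE = 10, DEV_FC_RX = 11, DEV_FC_TX = 12, DEV_FC_FULL = 13
};

/* Designated initializers keep the mapping correct even if the
 * enumerators are reordered. */
static const enum dev_fc_mode fc_map[] = {
	[FC_NONE]     = DEV_FC_NONE,
	[FC_RX_PAUSE] = DEV_FC_RX,
	[FC_TX_PAUSE] = DEV_FC_TX,
	[FC_FULL]     = DEV_FC_FULL,
};
```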
/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
}
rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
hw->func_caps.num_vsis - vsi_count);
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS);
+ RTE_ETH_64_POOLS);
if (pf->max_nb_vmdq_vsi) {
pf->flags |= I40E_FLAG_VMDQ;
pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
int mask = 0;
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = i40e_vlan_offload_set(dev, mask);
if (ret) {
PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
- if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+ if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
- else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+ else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
else {
PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
{
uint32_t vid_idx, vid_bit;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return 0;
vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
int ret;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return;
i40e_store_vlan_filter(vsi, vlan_id, on);
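The `RTE_ETH_VLAN_ID_MAX` checks above guard a VLAN filter table (VFTA) kept as a bitmap: 4096 VLAN IDs packed into 32-bit words. The i40e `I40E_VFTA_IDX`/bit macros likely follow the usual word/bit split sketched here (macros below are stand-ins, not the driver's):

```c
#include <stdint.h>

#define VFTA_IDX(vid) ((uint16_t)(vid) >> 5)       /* word index */
#define VFTA_BIT(vid) (1u << ((vid) & 0x1F))       /* bit in the word */

static uint32_t vfta[4096 / 32]; /* one bit per VLAN ID */

/* Set or clear the filter bit for one VLAN ID. */
static void vlan_filter_set(uint16_t vid, int on)
{
	if (on)
		vfta[VFTA_IDX(vid)] |= VFTA_BIT(vid);
	else
		vfta[VFTA_IDX(vid)] &= ~VFTA_BIT(vid);
}

static int vlan_filter_get(uint16_t vid)
{
	return !!(vfta[VFTA_IDX(vid)] & VFTA_BIT(vid));
}
```

`RTE_ETH_VLAN_ID_MAX` (4095) is the largest valid index into such a bitmap, which is why both hunks reject anything above it before touching the table.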
@@ -7727,25 +7727,25 @@ static int
i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
{
switch (filter_type) {
- case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
break;
- case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_IMAC_TENID:
+ case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
break;
- case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
break;
- case ETH_TUNNEL_FILTER_IMAC:
+ case RTE_ETH_TUNNEL_FILTER_IMAC:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
break;
- case ETH_TUNNEL_FILTER_OIP:
+ case RTE_ETH_TUNNEL_FILTER_OIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
break;
- case ETH_TUNNEL_FILTER_IIP:
+ case RTE_ETH_TUNNEL_FILTER_IIP:
*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
break;
default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8843,7 +8843,7 @@ int
i40e_pf_reset_rss_reta(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->adapter->hw;
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
int num;
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
* configured. It's necessary to calculate the actual PF
* queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
else
num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
if (!(rss_hf & pf->adapter->flow_types_mask) ||
- !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return 0;
hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_25G:
tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
else
*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_cfg->pfc.willing = 0;
dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
uint16_t bsf, tc_mapping;
int i, j = 0;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
else
dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
}
j++;
- } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+ } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
I40E_FLAG_RSS_AQ_CAPABLE)
#define I40E_RSS_OFFLOAD_ALL ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/* All bits of RSS hash enable for X722*/
#define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /**< Hash key. */
- uint16_t queue[ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
+ uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
bool symmetric_enable; /**< true, if enable symmetric */
uint64_t config_pctypes; /**< All PCTYPES with the flow */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
}
static uint16_t i40e_supported_tunnel_filter_types[] = {
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
- ETH_TUNNEL_FILTER_IMAC,
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
};
static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
vxlan_spec->vni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
rte_memcpy(&filter->outer_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_OMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
rte_memcpy(&filter->inner_mac,
&eth_spec->dst,
RTE_ETHER_ADDR_LEN);
- filter_type |= ETH_TUNNEL_FILTER_IMAC;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
filter->inner_vlan =
rte_be_to_cpu_16(vlan_spec->tci) &
I40E_VLAN_TCI_MASK;
- filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
}
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
nvgre_spec->tni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
- filter_type |= ETH_TUNNEL_FILTER_TENID;
+ filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
}
nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
/* IPv4 */
- { ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* IPv6 */
- { ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* Port */
- { ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
/* Ether */
- { ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
/* VLAN */
- { ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
};
#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
pattern, rss_mask, true, cus_pctype }
-#define I40E_HASH_L2_RSS_MASK (ETH_RSS_VLAN | ETH_RSS_ETH | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY)
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
-#define I40E_HASH_IPV4_L23_RSS_MASK (ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-#define I40E_HASH_L4_TYPES (ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
static const struct i40e_hash_match_pattern match_patterns[] = {
/* Ether */
I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+ RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
I40E_FILTER_PCTYPE_L2_PAYLOAD),
/* IPv4 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV4),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
/* IPv6 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
/* ESP */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
/* GTPC */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
/* L2TPV3 */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
/* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV6),
};
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
* it is the same case as none of them are added.
*/
- mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
- if (mask == ETH_RSS_L2_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
inset &= ~I40E_INSET_DMAC;
- else if (mask == ETH_RSS_L2_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
inset &= ~I40E_INSET_SMAC;
- mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
- if (mask == ETH_RSS_L3_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == ETH_RSS_L3_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
- if (mask == ETH_RSS_L4_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
inset &= ~I40E_INSET_DST_PORT;
- else if (mask == ETH_RSS_L4_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
inset &= ~I40E_INSET_SRC_PORT;
if (rss_types & I40E_HASH_L4_TYPES) {
uint64_t l3_mask = rss_types &
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
uint64_t l4_mask = rss_types &
- (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
if (l3_mask && !l4_mask)
inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
/* Update lookup table */
if (rss_info->queue_num > 0) {
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i, j = 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
"RSS key is ignored when queues specified");
pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
max_queue = i40e_pf_calc_configured_queues_num(pf);
else
max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
uint64_t type, mask;
/* Validate L2 */
- type = ETH_RSS_ETH & rss_types;
- mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L3 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L4 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
- mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
if (!type && mask)
return false;
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
event.event_data.link_event.link_status =
dev->data->dev_link.link_status;
- /* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+ /* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
switch (dev->data->dev_link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
for (i = 0; i < tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
if (k) {
for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = reg_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
}
/* check simple tx conflict */
if (ad->tx_simple_allowed) {
- if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+ if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
PMD_DRV_LOG(ERR, "No-simple tx is required.");
return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
ad->tx_vec_allowed = (ad->tx_simple_allowed &&
txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
bool rx_deferred_start; /**< don't start this queue in dev start */
uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
uint8_t dcb_tc; /**< Traffic class of rx queue */
- uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
const struct rte_memzone *mz;
};
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
bool q_set; /**< indicate if tx queue has been configured */
bool tx_deferred_start; /**< don't start this queue in dev start */
uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
const struct rte_memzone *mz;
};
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < n; i++) {
free[i] = txep[i].mbuf;
txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct i40e_rx_queue *rxq;
uint16_t desc, i;
bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
/* no QinQ support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
return -EINVAL;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
return i40e_vsi_config_vlan_filter(vsi, TRUE);
else
return i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
return i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
VIRTCHNL_VF_OFFLOAD_RX_POLLING)
#define IAVF_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
static const uint64_t map_hena_rss[] = {
/* IPv4 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
- ETH_RSS_NONFRAG_IPV4_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
- ETH_RSS_NONFRAG_IPV4_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
/* IPv6 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
- ETH_RSS_NONFRAG_IPV6_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
- ETH_RSS_NONFRAG_IPV6_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
/* L2 Payload */
- [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+ [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
};
- const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV4_OTHER |
- ETH_RSS_FRAG_IPV4;
+ const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_FRAG_IPV4;
- const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP |
- ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_FRAG_IPV6;
+ const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_FRAG_IPV6;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
/**
- * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+ * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
* generalizations of all other IPv4 and IPv6 RSS types.
*/
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
rss_hf |= ipv4_rss;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
rss_hf |= ipv6_rss;
RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
if (valid_rss_hf & ipv4_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
if (valid_rss_hf & ipv6_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
if (rss_hf & ~valid_rss_hf)
PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
return 0;
enable = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
iavf_config_vlan_insert_v2(adapter, enable);
return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
int err;
err = iavf_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to update vlan offload");
return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
ad->rx_vec_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Large VF setting */
if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
*/
switch (vf->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
bool enable;
int err;
- if (mask & ETH_VLAN_FILTER_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
iavf_iterate_vlan_filters_v2(dev, enable);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = iavf_config_vlan_strip_v2(adapter, enable);
/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
err = iavf_enable_vlan_strip(adapter);
else
err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(lut, vf->rss_lut, reta_size);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = vf->rss_lut[i];
}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
ret = iavf_query_stats(adapter, &pstats);
if (ret == 0) {
uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
RTE_ETHER_CRC_LEN;
iavf_update_stats(vsi, pstats);
stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 01724cd569dd..55d8a11da388 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -395,90 +395,90 @@ struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define IAVF_RSS_TYPE_OUTER_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
/* VLAN IPV4 */
#define IAVF_RSS_TYPE_VLAN_IPV4 (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4 ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4 RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6 ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6 RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* GTPU IPv4 */
#define IAVF_RSS_TYPE_GTPU_IPV4 (IAVF_RSS_TYPE_INNER_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_UDP (IAVF_RSS_TYPE_INNER_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_TCP (IAVF_RSS_TYPE_INNER_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define IAVF_RSS_TYPE_GTPU_IPV6 (IAVF_RSS_TYPE_INNER_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_UDP (IAVF_RSS_TYPE_INNER_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_TCP (IAVF_RSS_TYPE_INNER_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/**
* Supported pattern for hash.
@@ -496,7 +496,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv4_udp, IAVF_RSS_TYPE_VLAN_IPV4_UDP, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_vlan_ipv4_tcp, IAVF_RSS_TYPE_VLAN_IPV4_TCP, &outer_ipv4_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv4_sctp, IAVF_RSS_TYPE_VLAN_IPV4_SCTP, &outer_ipv4_sctp_tmplt},
- {iavf_pattern_eth_ipv4_gtpu, ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
+ {iavf_pattern_eth_ipv4_gtpu, RTE_ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4, IAVF_RSS_TYPE_GTPU_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_udp, IAVF_RSS_TYPE_GTPU_IPV4_UDP, &inner_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp, IAVF_RSS_TYPE_GTPU_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -538,9 +538,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ah, IAVF_RSS_TYPE_IPV4_AH, &ipv4_ah_tmplt},
{iavf_pattern_eth_ipv4_l2tpv3, IAVF_RSS_TYPE_IPV4_L2TPV3, &ipv4_l2tpv3_tmplt},
{iavf_pattern_eth_ipv4_pfcp, IAVF_RSS_TYPE_IPV4_PFCP, &ipv4_pfcp_tmplt},
- {iavf_pattern_eth_ipv4_gtpc, ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
- {iavf_pattern_eth_ecpri, ETH_RSS_ECPRI, &eth_ecpri_tmplt},
- {iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_gtpc, RTE_ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ecpri, RTE_ETH_RSS_ECPRI, &eth_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_ecpri, RTE_ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4_tcp, IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -565,7 +565,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
- {iavf_pattern_eth_ipv6_gtpu, ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
+ {iavf_pattern_eth_ipv6_gtpu, RTE_ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6, IAVF_RSS_TYPE_GTPU_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_udp, IAVF_RSS_TYPE_GTPU_IPV6_UDP, &inner_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp, IAVF_RSS_TYPE_GTPU_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -607,7 +607,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv6_ah, IAVF_RSS_TYPE_IPV6_AH, &ipv6_ah_tmplt},
{iavf_pattern_eth_ipv6_l2tpv3, IAVF_RSS_TYPE_IPV6_L2TPV3, &ipv6_l2tpv3_tmplt},
{iavf_pattern_eth_ipv6_pfcp, IAVF_RSS_TYPE_IPV6_PFCP, &ipv6_pfcp_tmplt},
- {iavf_pattern_eth_ipv6_gtpc, ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ipv6_gtpc, RTE_ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6_tcp, IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -648,52 +648,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
struct virtchnl_rss_cfg rss_cfg;
#define IAVF_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
rss_cfg.proto_hdrs = inner_ipv4_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
rss_cfg.proto_hdrs = inner_ipv6_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
@@ -855,28 +855,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr = &proto_hdrs->proto_hdr[i];
switch (hdr->type) {
case VIRTCHNL_PROTO_HDR_ETH:
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
hdr->field_selector = 0;
- else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
REFINE_PROTO_FLD(DEL, ETH_DST);
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
REFINE_PROTO_FLD(DEL, ETH_SRC);
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
- } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -884,39 +884,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -933,7 +933,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
break;
case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
else
hdr->field_selector = 0;
@@ -941,87 +941,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_TCP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_SCTP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
break;
case VIRTCHNL_PROTO_HDR_S_VLAN:
- if (!(rss_type & ETH_RSS_S_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_S_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_C_VLAN:
- if (!(rss_type & ETH_RSS_C_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_C_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_L2TPV3:
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ESP:
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_AH:
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_PFCP:
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ECPRI:
- if (!(rss_type & ETH_RSS_ECPRI))
+ if (!(rss_type & RTE_ETH_RSS_ECPRI))
hdr->field_selector = 0;
break;
default:
@@ -1038,7 +1038,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
struct virtchnl_proto_hdr *hdr;
int i;
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
for (i = 0; i < proto_hdrs->count; i++) {
@@ -1163,10 +1163,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -1177,27 +1177,27 @@ struct rss_attr_type {
uint64_t type;
};
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE64)
#define INVALID_RSS_ATTR (RTE_ETH_RSS_L3_PRE32 | \
@@ -1207,9 +1207,9 @@ struct rss_attr_type {
RTE_ETH_RSS_L3_PRE96)
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE64, VALID_RSS_IPV6},
{INVALID_RSS_ATTR, 0}
@@ -1226,15 +1226,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->vsi = vsi;
rxq->offloads = offloads;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
#define IAVF_VPMD_TX_MAX_FREE_BUF 64
#define IAVF_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define IAVF_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define IAVF_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IAVF_VECTOR_PATH 0
#define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
PMD_DRV_LOG(DEBUG, "RSS is not supported");
return -ENOTSUP;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
/* set all lut items to default queue */
memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ebd8ca57ef5f..1cda2db00e56 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -582,7 +582,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -644,7 +644,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
}
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -660,8 +660,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
return 0;
}
@@ -683,27 +683,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -933,42 +933,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
*/
switch (hw->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = hw->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -987,11 +987,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
udp_tunnel->udp_port);
break;
@@ -1018,8 +1018,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
break;
default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
bool enable = !!(dev_conf->rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (!ice_dcf_vlan_offload_ena(repr))
return -ENOTSUP;
- if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer VLAN in QinQ\n");
return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
err = ice_dcf_vf_repr_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
"Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
int err;
err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
TAILQ_INIT(&vsi->mac_list);
TAILQ_INIT(&vsi->vlan_list);
- /* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+ /* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
- ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+ RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
hw->func_caps.common_cap.rss_table_size;
pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
int ret;
#define ICE_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
cfg.symm = 0;
cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
/* Configure RSS for IPv4 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for IPv6 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_rx_queues) {
ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_set_rx_function(dev);
ice_set_tx_function(dev);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = ice_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->flow_type_rss_offloads = 0;
if (!is_safe_mode) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
dev_info->rx_queue_offload_capa = 0;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->reta_size = pf->hash_lut_size;
dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_align = ICE_ALIGN_RING_DESC,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_25G;
phy_type_low = hw->port_info->phy.phy_type_low;
phy_type_high = hw->port_info->phy.phy_type_high;
if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->nb_rx_queues = dev->data->nb_rx_queues;
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
status = ice_aq_get_link_info(hw->port_info, enable_lse,
&link_status, NULL);
if (status != ICE_SUCCESS) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
goto out;
}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
goto out;
/* Full-duplex operation at all supported speeds */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* Parse the link status */
switch (link_status.link_speed) {
case ICE_AQ_LINK_SPEED_10MB:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case ICE_AQ_LINK_SPEED_100MB:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case ICE_AQ_LINK_SPEED_1000MB:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ICE_AQ_LINK_SPEED_2500MB:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case ICE_AQ_LINK_SPEED_5GB:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case ICE_AQ_LINK_SPEED_10GB:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case ICE_AQ_LINK_SPEED_20GB:
- link.link_speed = ETH_SPEED_NUM_20G;
+ link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case ICE_AQ_LINK_SPEED_25GB:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case ICE_AQ_LINK_SPEED_40GB:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case ICE_AQ_LINK_SPEED_50GB:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case ICE_AQ_LINK_SPEED_100GB:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case ICE_AQ_LINK_SPEED_UNKNOWN:
PMD_DRV_LOG(ERR, "Unknown link speed");
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
default:
PMD_DRV_LOG(ERR, "None link speed");
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
out:
ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ice_vsi_config_vlan_filter(vsi, true);
else
ice_vsi_config_vlan_filter(vsi, false);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ice_vsi_config_vlan_stripping(vsi, true);
else
ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
break;
default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
break;
default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
int ret;
if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_TIMESTAMP)) {
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
return -1;
}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
ICE_FLAG_VF_MAC_BY_PF)
#define ICE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/**
* The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
#define ICE_IPV4_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
#define ICE_IPV6_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE32 | \
RTE_ETH_RSS_L3_PRE48 | \
RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
};
/* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_IPV4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_UDP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_TCP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV4_SCTP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4 ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4 RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG (ETH_RSS_ETH | ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_ETH_IPV6_UDP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_TCP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_L4_CHKSUM)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_L4_CHKSUM)
#define ICE_RSS_TYPE_ETH_IPV6_SCTP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6 ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6 RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define ICE_RSS_TYPE_VLAN_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV4)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_VLAN_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_SCTP (ICE_RSS_TYPE_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define ICE_RSS_TYPE_VLAN_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_FRAG (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_VLAN_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_SCTP (ICE_RSS_TYPE_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* GTPU IPv4 */
#define ICE_RSS_TYPE_GTPU_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define ICE_RSS_TYPE_GTPU_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* PPPOE */
-#define ICE_RSS_TYPE_PPPOE (ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE (RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
/* PPPOE IPv4 */
#define ICE_RSS_TYPE_PPPOE_IPV4 (ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
ICE_RSS_TYPE_PPPOE)
/* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/* MAC */
-#define ICE_RSS_TYPE_ETH ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH RTE_ETH_RSS_ETH
/**
* Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
*hash_flds &= ~ICE_FLOW_HASH_ETH;
- if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
- if (rss_type & ETH_RSS_ETH)
+ if (rss_type & RTE_ETH_RSS_ETH)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
- if (rss_type & ETH_RSS_C_VLAN)
+ if (rss_type & RTE_ETH_RSS_C_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
- else if (rss_type & ETH_RSS_S_VLAN)
+ else if (rss_type & RTE_ETH_RSS_S_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
- if (!(rss_type & ETH_RSS_PPPOE))
+ if (!(rss_type & RTE_ETH_RSS_PPPOE))
*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
}
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
}
- if (rss_type & ETH_RSS_IPV4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
}
if (rss_type & RTE_ETH_RSS_L3_PRE32) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE48) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
}
- if (rss_type & ETH_RSS_L4_CHKSUM)
+ if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
}
}
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
};
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current IPv6 prefix support only covers prefix lengths up to 64 bits */
{RTE_ETH_RSS_L3_PRE32, VALID_RSS_IPV6},
{RTE_ETH_RSS_L3_PRE48, VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
}
}
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
/* Register mbuf field and flag for Rx timestamp */
err = rte_mbuf_dyn_rx_timestamp_register(
&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
QRXFLXP_CNTXT_RXDID_PRIO_M;
- if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
regval |= QRXFLXP_CNTXT_TS_M;
ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = vsi->base_queue + queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
ice_rxd_to_vlan_tci(mb, &rxdp[j]);
rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
ts_ns = ice_tstamp_convert_32b_64b(hw,
rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
for (i = 0; i < txq->tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
rte_mempool_put(txep->mbuf->pool, txep->mbuf);
txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
}
#define ICE_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define ICE_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define ICE_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define ICE_VECTOR_PATH 0
#define ICE_VECTOR_OFFLOAD_PATH 1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
if (rxq->proto_xtr != PROTO_XTR_NONE)
return -1;
- if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
return -1;
if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
/* To avoid breaking software that sets an invalid mode, only display
* a warning if an invalid mode is used.
*/
- if (tx_mq_mode != ETH_MQ_TX_NONE)
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
PMD_INIT_LOG(WARNING,
"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = igc_check_mq_mode(dev);
if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (speed == SPEED_2500) {
uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
} else {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
/* VLAN Offload Settings */
eth_igc_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
hw->mac.autoneg = 1;
} else {
int num_speeds = 0;
- if (*speeds & ETH_LINK_SPEED_FIXED) {
+ if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_DRV_LOG(ERR,
"Force speed mode currently not supported");
igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
hw->phy.autoneg_advertised = 0;
hw->mac.autoneg = 1;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_2_5G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
num_speeds++;
}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
- dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_vmdq_pools = 0;
dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->fc.requested_mode = igc_fc_none;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->fc.requested_mode = igc_fc_rx_pause;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->fc.requested_mode = igc_fc_tx_pause;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->fc.requested_mode = igc_fc_full;
break;
default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of the configured RSS redirection table (%d) doesn't match the number hardware can support (%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* set redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta, reg;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to update the register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* check mask whether need to read the register value first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of the configured RSS redirection table (%d) doesn't match the number hardware can support (%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* read redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to read register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf = 0;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf |= rss_hf;
return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igc_vlan_hw_strip_enable(dev);
else
igc_vlan_hw_strip_disable(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igc_vlan_hw_filter_enable(dev);
else
igc_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return igc_vlan_hw_extend_enable(dev);
else
return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
uint32_t reg_val;
/* only outer TPID of double VLAN can be configured*/
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg_val = IGC_READ_REG(hw, IGC_VET);
reg_val = (reg_val & (~IGC_VET_EXT)) |
((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
#define IGC_TX_MAX_MTU_SEG UINT8_MAX
#define IGC_RX_OFFLOAD_ALL ( \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IGC_TX_OFFLOAD_ALL ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_UDP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_UDP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define IGC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IGC_MAX_ETQF_FILTERS 3 /* etqf(3) is used for 1588 */
#define IGC_ETQF_FILTER_1588 3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
/**< Start context position for transmit queue. */
struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
}
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
}
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igc_rss_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/*
* configure RSS register for following,
* then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0;
bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
rxcsum |= IGC_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= IGC_RXCSUM_IPOFL;
else
rxcsum &= ~IGC_RXCSUM_IPOFL;
if (offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rxcsum |= IGC_RXCSUM_TUOFL;
- offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
} else {
rxcsum &= ~IGC_RXCSUM_TUOFL;
}
- if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
rxcsum |= IGC_RXCSUM_CRCOFL;
else
rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
dvmolr |= IGC_DVMOLR_STRVLAN;
else
dvmolr &= ~IGC_DVMOLR_STRVLAN;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
dvmolr &= ~IGC_DVMOLR_STRCRC;
else
dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
if (on) {
reg_val |= IGC_DVMOLR_STRVLAN;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(link));
if (adapter->idev.port_info->config.an_enable) {
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
}
if (!adapter->link_up ||
!(lif->state & IONIC_LIF_F_UP)) {
/* Interface is down */
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else {
/* Interface is up */
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (adapter->link_speed) {
case 10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
dev_info->speed_capa =
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/*
* Per-queue capabilities
* RTE does not support disabling a feature on a queue if it is
* enabled globally on the device. Thus the driver does not advertise
- * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+ * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
* though the driver would be otherwise capable of disabling it on
* a per-queue basis.
*/
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
*/
dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
0;
dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
0;
dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
fc_conf->autoneg = 0;
if (idev->port_info->config.pause_type)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
break;
- case RTE_FC_RX_PAUSE:
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
return -ENOTSUP;
}
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = tbl_sz / RTE_RETA_GROUP_SIZE;
+ num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if (reta_conf[i].mask & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
}
}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
IONIC_RSS_HASH_KEY_SIZE);
if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
rss_conf->rss_hf = rss_hf;
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (!lif->rss_ind_tbl)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
rss_types |= IONIC_RSS_TYPE_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
rss_types |= IONIC_RSS_TYPE_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
static inline uint32_t
ionic_parse_link_speeds(uint16_t link_speeds)
{
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
return 100000;
- else if (link_speeds & ETH_LINK_SPEED_50G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
return 50000;
- else if (link_speeds & ETH_LINK_SPEED_40G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
return 40000;
- else if (link_speeds & ETH_LINK_SPEED_25G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
return 25000;
- else if (link_speeds & ETH_LINK_SPEED_10G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
return 10000;
else
return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
IONIC_PRINT_CALL();
allowed_speeds =
- ETH_LINK_SPEED_FIXED |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_FIXED |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
if (dev_conf->link_speeds & ~allowed_speeds) {
IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure link */
- an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
ionic_dev_cmd_port_autoneg(idev, an_enable);
err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
#include <rte_ethdev.h>
#define IONIC_ETH_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
/*
* IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
- * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+ * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
*/
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
else
lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
/*
* NB: While it is true that RSS_HASH is always enabled on ionic,
* setting this flag unconditionally causes problems in DTS.
- * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
*/
/* RX per-port */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
lif->features |= IONIC_ETH_HW_RX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_RX_CSUM;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
lif->features |= IONIC_ETH_HW_RX_SG;
lif->eth_dev->data->scattered_rx = 1;
} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
}
/* Covers VLAN_STRIP */
- ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+ ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
/* TX per-port */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
lif->features |= IONIC_ETH_HW_TX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_TX_CSUM;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
else
lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
lif->features |= IONIC_ETH_HW_TX_SG;
else
lif->features &= ~IONIC_ETH_HW_TX_SG;
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
lif->features |= IONIC_ETH_HW_TSO;
lif->features |= IONIC_ETH_HW_TSO_IPV6;
lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
txq->flags |= IONIC_QCQ_F_DEFERRED;
/* Convert the offload flags into queue flags */
- if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_L3;
- if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_TCP;
- if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_UDP;
eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/*
* Note: the interface does not currently support
- * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
* when the adapter will be able to keep the CRC and subtract
* it to the length for all received packets:
* if (eth_dev->data->dev_conf.rxmode.offloads &
- * DEV_RX_OFFLOAD_KEEP_CRC)
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC)
* rxq->crc_len = ETHER_CRC_LEN;
*/
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->speed_capa =
(hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
- ETH_LINK_SPEED_10G :
+ RTE_ETH_LINK_SPEED_10G :
((hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
- ETH_LINK_SPEED_25G :
- ETH_LINK_SPEED_AUTONEG);
+ RTE_ETH_LINK_SPEED_25G :
+ RTE_ETH_LINK_SPEED_AUTONEG);
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
};
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
(uint64_t *)&link_speed);
switch (link_speed) {
case IFPGA_RAWDEV_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case IFPGA_RAWDEV_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= IXGBE_DMATXCTL_GDV;
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (qinq) {
reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
" by single VLAN");
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (qinq) {
/* Only the high 16-bits is valid */
IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
if (hw->mac.type == ixgbe_mac_82598EB) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
ctrl |= IXGBE_VLNCTRL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl |= IXGBE_RXDCTL_VME;
on = TRUE;
} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct ixgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
ixgbe_vlan_hw_strip_config(dev);
- }
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ixgbe_vlan_hw_filter_enable(dev);
else
ixgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
ixgbe_vlan_hw_extend_enable(dev);
else
ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB) {
if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
ixgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- allowed_speeds = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
break;
default:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
}
link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
speed = IXGBE_LINK_SPEED_82599_AUTONEG;
}
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= IXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= IXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= IXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= IXGBE_LINK_SPEED_100_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= IXGBE_LINK_SPEED_10_FULL;
}
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB)
dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
if (hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X540_vf ||
hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550_vf) {
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
}
if (hw->mac.type == ixgbe_mac_X550) {
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
}
/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
u32 esdp_reg;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
if (diag != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case IXGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
case IXGBE_LINK_SPEED_10_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case IXGBE_LINK_SPEED_100_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case IXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case IXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case IXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case IXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
ixgbevf_vlan_strip_queue_set(dev, i, on);
}
}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= IXGBE_VMOLR_AUPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= IXGBE_VMOLR_ROMPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= IXGBE_VMOLR_ROPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= IXGBE_VMOLR_BAM;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= IXGBE_VMOLR_MPE;
return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = IXGBE_INCVAL_100;
shift = IXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = IXGBE_INCVAL_1GB;
shift = IXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = IXGBE_INCVAL_10GB;
shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- return ETH_RSS_RETA_SIZE_512;
+ return RTE_ETH_RSS_RETA_SIZE_512;
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
- return ETH_RSS_RETA_SIZE_64;
+ return RTE_ETH_RSS_RETA_SIZE_64;
case ixgbe_mac_X540_vf:
case ixgbe_mac_82599_vf:
return 0;
default:
- return ETH_RSS_RETA_SIZE_128;
+ return RTE_ETH_RSS_RETA_SIZE_128;
}
}
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- if (reta_idx < ETH_RSS_RETA_SIZE_128)
+ if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
return IXGBE_RETA(reta_idx >> 2);
else
- return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+ return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled*/
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled*/
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
#define IXGBE_FDIR_NVGRE_TUNNEL_TYPE 0x0
#define IXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IXGBE_VF_IRQ_ENABLE_MASK 3 /* vf irq enable mask */
#define IXGBE_VF_MAXMSIVECTOR 1
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
uint32_t key);
static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input, uint8_t queue,
uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
{
*fdirctrl = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
IXGBE_SECTXCTRL_STORE_FORWARD);
reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
gpie |= IXGBE_GPIE_VTMODE_16;
break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a51450fe5b82..aa3a406c204d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hw->mac.type != ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return offloads;
}
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
uint64_t offloads;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (ixgbe_is_vf(dev) == 0)
- offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
/*
* RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X550) &&
!RTE_ETH_DEV_SRIOV(dev).active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
}
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
ixgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
}
/* MRQC: enable vmdq and dcb */
- mrqc = (num_pools == ETH_16_POOLS) ?
+ mrqc = (num_pools == RTE_ETH_16_POOLS) ?
IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 16 or 32 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*
* MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
if (hw->mac.type != ixgbe_mac_82598EB)
/*PF VF Transmit Enable*/
IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
- vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
if (hw->mac.type != ixgbe_mac_82598EB) {
config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_configure(dev);
}
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
- if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- }
}
if (config_dcb_tx) {
/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = ixgbe_dcb_pfc_enabled;
}
ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 64 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
break;
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQEN);
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT4TCEN);
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT8TCEN);
break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
ixgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
ixgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
ixgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
ixgbe_vmdq_tx_hw_configure(hw);
else {
mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
IXGBE_MTQC_8TC_8TQ;
break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
rxq->rx_using_sse = rx_using_sse;
#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
#endif
}
}
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
/*
* According to chapter of 4.6.7.2.1 of the Spec Rev.
* 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RFCTL configuration */
rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
- if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~IXGBE_RFCTL_RSC_DIS;
else
rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
else
hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
rxcsum |= IXGBE_RXCSUM_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= IXGBE_RXCSUM_IPPCSE;
else
rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540) {
rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
else
rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY) ||
+ RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY)) {
+ RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = ixgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -5681,7 +5680,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5730,7 +5729,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
@@ -5738,8 +5737,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
uint8_t rx_udp_csum_zero_err;
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
/* no fdir support */
if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
/**< Maximum number of MAC addresses. */
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/**< Device RX offload capabilities. */
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**< Device TX offload capabilities. */
dev_info->speed_capa =
representor->pf_ethdev->data->dev_link.link_speed;
- /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
dev_info->switch_info.name =
representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
*/
if (hw->mac.type == ixgbe_mac_82598EB)
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_16_POOLS;
+ RTE_ETH_16_POOLS;
else
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_64_POOLS;
+ RTE_ETH_64_POOLS;
for (q = 0; q < queues_per_pool; q++)
(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
* @param rx_mask
* The RX mode mask, which is one or more of accepting Untagged Packets,
* packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-* ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-* ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+* RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+* RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
* in rx_mode.
* @param on
* 1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static int is_kni_initialized;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
break;
/* CN23xx 25G cards */
case PCI_SUBSYS_DEV_ID_CN2350_225:
case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = ETH_LINK_SPEED_25G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
break;
default:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
lio_dev_err(lio_dev,
"Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->max_mac_addrs = 1;
- devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+ devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
+ devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
devinfo->rx_desc_lim = lio_rx_desc_lim;
devinfo->tx_desc_lim = lio_tx_desc_lim;
devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_EX |
- ETH_RSS_IPV6_TCP_EX);
+ devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_IPV6_TCP_EX);
return 0;
}
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
- for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
rss_state->itable[index] = reta_conf[i].reta[j];
}
}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
if (rss_state->ip)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (rss_state->tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (rss_state->ipv6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (rss_state->ipv6_tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (rss_state->ipv6_ex)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
rss_conf->rss_hf = rss_hf;
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_state->hash_disable)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
hashinfo |= LIO_RSS_HASH_IPV4;
rss_state->ip = 1;
} else {
rss_state->ip = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV4;
rss_state->tcp_hash = 1;
} else {
rss_state->tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
hashinfo |= LIO_RSS_HASH_IPV6;
rss_state->ipv6 = 1;
} else {
rss_state->ipv6 = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6;
rss_state->ipv6_tcp_hash = 1;
} else {
rss_state->ipv6_tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
hashinfo |= LIO_RSS_HASH_IPV6_EX;
rss_state->ipv6_ex = 1;
} else {
rss_state->ipv6_ex = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
rss_state->ipv6_tcp_ex_hash = 1;
} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
/* Initialize */
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
/* Return what we found */
if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
return rte_eth_linkstatus_set(eth_dev, &link);
}
- link.link_status = ETH_LINK_UP; /* Interface is up */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (lio_dev->linfo.link.s.speed) {
case LIO_LINK_SPEED_10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case LIO_LINK_SPEED_25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
}
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[conf_idx].reta[reta_idx] = q_idx;
reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rss_conf rss_conf;
switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
lio_dev_rss_configure(eth_dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode. */
default:
memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
lio_dev_err(lio_dev, "Unable to set Link Down\n");
return -1;
}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Inform firmware about change in number of queues to use.
* Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
int i;
int ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
#define MEMIF_MP_SEND_REGION "memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
}
MIF_LOG(INFO, "Connected.");
return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
proc_private = dev->process_private;
- if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
proc_private->regions_num == 0) {
memif_mp_request_regions(dev);
- } else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+ } else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
proc_private->regions_num > 0) {
memif_free_regions(dev);
}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = priv->if_index;
info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_56G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_56G;
info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
dev_link.link_speed = link_speed;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
dev->data->dev_link = dev_link;
return 0;
}
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
ret = 0;
out:
MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
};
static const uint64_t dpdk[] = {
[INNER] = 0,
- [IPV4] = ETH_RSS_IPV4,
- [IPV4_1] = ETH_RSS_FRAG_IPV4,
- [IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
- [IPV6] = ETH_RSS_IPV6,
- [IPV6_1] = ETH_RSS_FRAG_IPV6,
- [IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
- [IPV6_3] = ETH_RSS_IPV6_EX,
+ [IPV4] = RTE_ETH_RSS_IPV4,
+ [IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+ [IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IPV6] = RTE_ETH_RSS_IPV6,
+ [IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+ [IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IPV6_3] = RTE_ETH_RSS_IPV6_EX,
[TCP] = 0,
[UDP] = 0,
- [IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
- [IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
- [IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
- [IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
- [IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
- [IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+ [IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ [IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ [IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+ [IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+ [IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ [IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
};
static const uint64_t verbs[RTE_DIM(dpdk)] = {
[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
* - MAC flow rules are generated from @p dev->data->mac_addrs
* (@p priv->mac array).
* - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
* is enabled and VLAN filters are configured.
*
* @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
struct rte_ether_addr *rule_mac = &eth_spec.dst;
rte_be16_t *rule_vlan =
(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!ETH_DEV(priv)->data->promiscuous ?
&vlan_spec.tci :
NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
static void
mlx4_link_status_alarm(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
ETH_DEV(priv)->data->dev_conf.intr_conf;
MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
};
uint32_t caught[RTE_DIM(type)] = { 0 };
struct ibv_async_event event;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
ETH_DEV(priv)->data->dev_conf.intr_conf;
unsigned int i;
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
int
mlx4_intr_install(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
ETH_DEV(priv)->data->dev_conf.intr_conf;
int rc;
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
int
mlx4_rxq_intr_enable(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
ETH_DEV(priv)->data->dev_conf.intr_conf;
if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
uint64_t
mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_RSS_HASH;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
- offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return offloads;
}
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
uint64_t
mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
(void)priv;
return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
/* By default, FCS (CRC) is stripped by hardware. */
crc_present = 0;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (priv->hw_fcs_strip) {
crc_present = 1;
} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts = elts,
/* Toggle Rx checksum offload if hardware supports it. */
.csum = priv->hw_csum &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.csum_l2tun = priv->hw_csum_l2tun &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.crc_present = crc_present,
.l2tun_offload = priv->hw_csum_l2tun,
.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
uint64_t
mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+ uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (priv->hw_csum) {
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
}
if (priv->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (priv->hw_csum_l2tun) {
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (priv->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
}
return offloads;
}
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts_comp_cd_init =
RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
.csum = priv->hw_csum &&
- (offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM)),
+ (offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
.csum_l2tun = priv->hw_csum_l2tun &&
(offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
/* Enable Tx loopback for VF devices. */
.lb = !!priv->vf,
.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
dev_link.link_speed = link_speed;
priv->link_speed_capa = 0;
if (edata.supported & (SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (edata.supported & SUPPORTED_10000baseKR_Full)
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (edata.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
return ret;
}
dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
- ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+ RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
sc = ecmd->link_mode_masks[0] |
((uint64_t)ecmd->link_mode_masks[1] << 32);
priv->link_speed_capa = 0;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
sc = ecmd->link_mode_masks[2] |
((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
MLX5_BITSHIFT
(ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 111a7597317a..23d9e0a476ac 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1310,8 +1310,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1594,7 +1594,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/*
* If HW has bug working with tunnel packet decapsulation and
* scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
- * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+ * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
*/
if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7263d354b180..3a9b716e438c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1704,10 +1704,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_udp_tunnel *udp_tunnel)
{
MLX5_ASSERT(udp_tunnel != NULL);
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
udp_tunnel->udp_port == 4789)
return 0;
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
udp_tunnel->udp_port == 4790)
return 0;
return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 42cacd0bbe3b..52f03ada2ced 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1233,7 +1233,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
struct mlx5_flow_rss_desc {
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint64_t hash_fields; /* Verbs Hash fields. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
#define MLX5_VPMD_DESCS_PER_LOOP 4
/* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
/* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
MLX5_RSS_SRC_DST_ONLY))
/* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
}
if ((dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+ RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->default_txportconf.ring_size = 256;
info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
- if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
- (priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+ if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+ (priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
info->default_rxportconf.nb_queues = 16;
info->default_txportconf.nb_queues = 16;
if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 002449e993e7..d645fd48647e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
uint64_t rss_types;
/**<
* RSS types bit-field associated with this node
- * (see ETH_RSS_* definitions).
+ * (see RTE_ETH_RSS_* definitions).
*/
uint64_t node_flags;
/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
* @param[in] pattern
* User flow pattern.
* @param[in] types
- * RSS types to expand (see ETH_RSS_* definitions).
+ * RSS types to expand (see RTE_ETH_RSS_* definitions).
* @param[in] graph
* Input graph to expand @p pattern according to @p types.
* @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_OUTER_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_GRE,
MLX5_EXPANSION_NVGRE),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_VXLAN] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
MLX5_EXPANSION_IPV4_TCP),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_IPV4_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
MLX5_EXPANSION_IPV6_TCP,
MLX5_EXPANSION_IPV6_FRAG_EXT),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_IPV6_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1100,7 +1100,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
* @param[in] tunnel
* 1 when the hash field is for a tunnel item.
* @param[in] layer_types
- * ETH_RSS_* types.
+ * RTE_ETH_RSS_* types.
* @param[in] hash_fields
* Item hash fields.
*
@@ -1653,14 +1653,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
&rss->types,
"some RSS protocols are not"
" supported");
- if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
- !(rss->types & ETH_RSS_IP))
+ if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+ !(rss->types & RTE_ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L3 partial RSS requested but L3 RSS"
" type not specified");
- if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
- !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+ if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+ !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L4 partial RSS requested but L4 RSS"
@@ -6427,8 +6427,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
* mlx5_flow_hashfields_adjust() in advance.
*/
rss_desc->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
}
flow->dev_handles = 0;
if (rss && rss->types) {
@@ -7126,7 +7126,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
if (!priv->reta_idx_n || !priv->rxqs_n) {
return 0;
}
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
queue[i] = (*priv->reta_idx)[i];
@@ -8794,7 +8794,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
NULL, "invalid port configuration");
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
ctx->action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f1a83d537d0c..4a16f30fb7a6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -331,18 +331,18 @@ enum mlx5_feature_name {
/* Valid layer type for IPV4 RSS. */
#define MLX5_IPV4_LAYER_TYPES \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
/* IBV hash source bits for IPV4. */
#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
/* Valid layer type for IPV6 RSS. */
#define MLX5_IPV6_LAYER_TYPES \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
/* IBV hash source bits for IPV6. */
#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5bd90bfa2818..c4a5706532a9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10862,9 +10862,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
else
dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10872,9 +10872,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
else
dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10888,11 +10888,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
return;
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
- if (rss_types & ETH_RSS_UDP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_UDP;
else
@@ -10900,11 +10900,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
}
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
- if (rss_types & ETH_RSS_TCP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_TCP;
else
@@ -14444,9 +14444,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4:
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV4;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV4;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV4;
else
*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14455,9 +14455,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV6:
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV6;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV6;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV6;
else
*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14466,11 +14466,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_UDP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_UDP:
- if (rss_types & ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_UDP) {
*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
else
*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14479,11 +14479,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_TCP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_TCP:
- if (rss_types & ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_TCP) {
*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
else
*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14631,8 +14631,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
origin = &shared_rss->origin;
origin->func = rss->func;
origin->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 892abcb65779..f9010a674d7f 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1824,7 +1824,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_TCP,
+ (rss_desc, tunnel, RTE_ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
IBV_RX_HASH_DST_PORT_TCP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1837,7 +1837,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_UDP,
+ (rss_desc, tunnel, RTE_ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
IBV_RX_HASH_DST_PORT_UDP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
if (!(*priv->rxqs)[i])
continue;
(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
- !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+ !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
++idx;
}
return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
}
/* Fill each entry of the table even if its bit is not set. */
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
(*priv->reta_idx)[i];
}
return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
return ret;
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
if (((reta_conf[idx].mask >> i) & 0x1) == 0)
continue;
MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 60673d014d02..14b9991c5fa8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *config = &priv->config;
- uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
if (config->hw_fcs_strip)
- offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (config->hw_csum)
- offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
uint64_t
mlx5_get_rx_port_offloads(void)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
return offloads;
}
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->dev_conf.rxmode.offloads;
/* The offloads should be checked on rte_eth_dev layer. */
- MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
DRV_LOG(ERR, "port %u queue index %u split "
"offload not configured",
@@ -1336,7 +1336,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
- unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+ unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1439,7 +1439,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
MLX5_ASSERT(tmpl->rxq.rxseg_n &&
tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
- if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
@@ -1485,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
config->mprq.stride_size_n : mprq_stride_size;
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_scatter_en =
- !!(offloads & DEV_RX_OFFLOAD_SCATTER);
+ !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1500,7 +1500,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pktlen;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
if (lro_on_queue && first_mb_free_size <
@@ -1561,9 +1561,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
- tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+ tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* Configure Rx timestamp. */
- tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+ tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
tmpl->rxq.timestamp_rx_flag = 0;
if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
&tmpl->rxq.timestamp_offset,
@@ -1572,11 +1572,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
goto error;
}
/* Configure VLAN stripping. */
- tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
tmpl->rxq.lro = lro_on_queue;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
@@ -1606,7 +1606,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.crc_present << 2);
/* Save port ID. */
tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
- (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+ (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
tmpl->rxq.port_id = dev->data->port_id;
tmpl->priv = priv;
tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
/* HW checksum offload capabilities of vectorized Tx. */
#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
/*
* Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
unsigned int diff = 0, olx = 0, i, m;
MLX5_ASSERT(priv);
- if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
/* We should support Multi-Segment Packets. */
olx |= MLX5_TXOFF_CONFIG_MULTI;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
/* We should support TCP Send Offload. */
olx |= MLX5_TXOFF_CONFIG_TSO;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support Software Parser for Tunnels. */
olx |= MLX5_TXOFF_CONFIG_SWP;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support IP/TCP/UDP Checksums. */
olx |= MLX5_TXOFF_CONFIG_CSUM;
}
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
/* We should support VLAN insertion. */
olx |= MLX5_TXOFF_CONFIG_VLAN;
}
- if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
rte_mbuf_dynflag_lookup
(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
struct mlx5_dev_config *config = &priv->config;
if (config->hw_csum)
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if (config->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (config->tx_pp)
- offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+ offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
if (config->swp) {
if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->swp & MLX5_SW_PARSING_TSO_CAP)
- offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
}
if (config->tunnel_en) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso) {
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
- offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GRE_CAP)
- offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
if (config->tunnel_en &
MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
- offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
}
}
if (!config->mprq.enabled)
- offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
return offloads;
}
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
unsigned int inlen_mode; /* Minimal required Inline data. */
unsigned int txqs_inline; /* Min Tx queues to enable inline. */
uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
- bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
bool vlan_inline;
unsigned int temp;
txq_ctrl->txq.fast_free =
- !!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
- !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+ !!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
!config->mprq.enabled);
if (config->txqs_inline == MLX5_ARG_UNSET)
txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
* tx_burst routine.
*/
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
- vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+ vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
/*
* If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
MLX5_MAX_TSO_HEADER);
txq_ctrl->txq.tso_en = 1;
}
- if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+ if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
- ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
- ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+ ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
(config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
(config->swp & MLX5_SW_PARSING_TSO_CAP))
txq_ctrl->txq.tunnel_en = 1;
- txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+ txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_TSO_CAP)) |
- ((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+ ((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
txq_ctrl->txq.offloads) && (config->swp &
MLX5_SW_PARSING_CSUM_CAP));
}
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (!priv->config.hw_vlan_strip) {
DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 31c4d3276053..9a9069da7572 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
struct mvneta_priv *priv = dev->data->dev_private;
struct neta_ppio_params *ppio_params;
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *info)
{
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G;
info->max_rx_queues = MRVL_NETA_RXQ_MAX;
info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
neta_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 126a9a0c11b9..ccb87d518d83 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
#define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
/** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
if (rss_conf->rss_hf == 0) {
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
- } else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_2_TUPLE;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 1;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
- dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+ dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return ret;
if (dev->data->nb_rx_queues == 1 &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
ret = pp2_ppio_disable(priv->ppio);
if (ret)
return ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
if (dev->data->all_multicast == 1)
mrvl_allmulticast_enable(dev);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = mrvl_populate_vlan_table(dev, 1);
if (ret) {
MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
priv->flow_ctrl = 0;
}
- if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
ret = mrvl_dev_set_link_up(dev);
if (ret) {
MRVL_LOG(ERR, "Failed to set link up");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case SPEED_10000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
pp2_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
info->max_rx_queues = MRVL_PP2_RXQ_MAX;
info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
info->tx_offload_capa = MRVL_TX_OFFLOADS;
info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
- info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP;
+ info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP;
/* By default packets are dropped if no descriptors are available */
info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
int ret;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
MRVL_LOG(ERR, "VLAN stripping is not supported\n");
return -ENOTSUP;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = mrvl_populate_vlan_table(dev, 1);
else
ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return ret;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
MRVL_LOG(ERR, "Extend VLAN not supported\n");
return -ENOTSUP;
}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
- rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+ rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return ret;
}
- fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+ fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
if (en) {
- if (fc_conf->mode == RTE_FC_NONE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
}
return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
rx_en = 1;
tx_en = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
rx_en = 0;
tx_en = 1;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
rx_en = 1;
tx_en = 0;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
rx_en = 0;
tx_en = 0;
break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hash_type == PP2_PPIO_HASH_T_NONE)
rss_conf->rss_hf = 0;
else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
- rss_conf->rss_hf = ETH_RSS_IPV4;
+ rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
return 0;
}
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
eth_dev->dev_ops = &mrvl_ops;
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_eth_dev_probing_finish(eth_dev);
return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
#include "hn_nvs.h"
#include "ndis.h"
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NETVSC_ARG_LATENCY "latency"
#define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
hn_rndis_get_linkspeed(hv);
link = (struct rte_eth_link) {
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_autoneg = ETH_LINK_SPEED_FIXED,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
.link_speed = hv->link_speed / 10000,
};
if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
if (old.link_status == link.link_status)
return 0;
PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
- (link.link_status == ETH_LINK_UP) ? "up" : "down");
+ (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
return rte_eth_linkstatus_set(dev, &link);
}
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
struct hn_data *hv = dev->data->dev_private;
int rc;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = HN_MAX_XFER_LEN;
dev_info->max_mac_addrs = 1;
dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
dev_info->flow_type_rss_offloads = hv->rss_offloads;
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->max_rx_queues = hv->max_queues;
dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
/* Convert from DPDK RSS hash flags to NDIS hash flags */
hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
hv->rss_hash |= NDIS_HASH_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
hv->rss_hash |= NDIS_HASH_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
hv->rss_hash |= NDIS_HASH_IPV6_EX;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
if (hv->rss_hash & NDIS_HASH_IPV4)
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (hv->rss_hash & NDIS_HASH_IPV6)
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
if (hv->rss_hash & NDIS_HASH_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
return 0;
}
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = hn_rndis_conf_offload(hv, txmode->offloads,
rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
hv->rss_offloads = 0;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
- hv->rss_offloads |= ETH_RSS_IPV4
- | ETH_RSS_NONFRAG_IPV4_TCP
- | ETH_RSS_NONFRAG_IPV4_UDP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV4
+ | RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
- hv->rss_offloads |= ETH_RSS_IPV6
- | ETH_RSS_NONFRAG_IPV6_TCP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6
+ | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
- hv->rss_offloads |= ETH_RSS_IPV6_EX
- | ETH_RSS_IPV6_TCP_EX;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+ | RTE_ETH_RSS_IPV6_TCP_EX;
/* Commit! */
*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
== NDIS_RXCSUM_CAP_TCP4)
params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
== NDIS_TXCSUM_CAP_IP4)
params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
else
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
else
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
return error;
}
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
== HN_NDIS_TXCSUM_CAP_IP4)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
== HN_NDIS_TXCSUM_CAP_TCP4 &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
== HN_NDIS_TXCSUM_CAP_TCP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
(hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
== HN_NDIS_LSOV2_CAP_IP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
return 0;
}
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = dev->data->nb_rx_queues;
dev_info->max_tx_queues = dev->data->nb_tx_queues;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
status.speed = MAC_SPEED_UNKNOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
if (internals->rxmac[0] != NULL) {
nc_rxmac_read_status(internals->rxmac[0], &status);
switch (status.speed) {
case MAC_SPEED_10G:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case MAC_SPEED_40G:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case MAC_SPEED_100G:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
nc_rxmac_read_status(internals->rxmac[i], &status);
if (status.enabled && status.link_up) {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
break;
}
}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
}
/* Timestamps are enabled when there is
* key-value pair: enable_timestamp=1
- * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+ * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
*/
if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Checking TX mode */
if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
}
/* Checking RX mode */
- if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
!(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
PMD_INIT_LOG(INFO, "RSS not supported");
return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
hw->mtu = dev->data->mtu;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_L2MC;
/* TX checksum offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
/* LSO offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hw->cap & NFP_NET_CFG_CTRL_LSO)
ctrl |= NFP_NET_CFG_CTRL_LSO;
else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
/* RX gather */
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
ctrl |= NFP_NET_CFG_CTRL_GATHER;
return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
int ret;
static const uint32_t ls_to_ethtool[] = {
- [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_1G] = ETH_SPEED_NUM_1G,
- [NFP_NET_CFG_STS_LINK_RATE_10G] = ETH_SPEED_NUM_10G,
- [NFP_NET_CFG_STS_LINK_RATE_25G] = ETH_SPEED_NUM_25G,
- [NFP_NET_CFG_STS_LINK_RATE_40G] = ETH_SPEED_NUM_40G,
- [NFP_NET_CFG_STS_LINK_RATE_50G] = ETH_SPEED_NUM_50G,
- [NFP_NET_CFG_STS_LINK_RATE_100G] = ETH_SPEED_NUM_100G,
+ [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_1G] = RTE_ETH_SPEED_NUM_1G,
+ [NFP_NET_CFG_STS_LINK_RATE_10G] = RTE_ETH_SPEED_NUM_10G,
+ [NFP_NET_CFG_STS_LINK_RATE_25G] = RTE_ETH_SPEED_NUM_25G,
+ [NFP_NET_CFG_STS_LINK_RATE_40G] = RTE_ETH_SPEED_NUM_40G,
+ [NFP_NET_CFG_STS_LINK_RATE_50G] = RTE_ETH_SPEED_NUM_50G,
+ [NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
memset(&link, 0, sizeof(struct rte_eth_link));
if (nn_link_status & NFP_NET_CFG_STS_LINK)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
NFP_NET_CFG_STS_LINK_RATE_MASK;
if (nn_link_status >= RTE_DIM(ls_to_ethtool))
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
link.link_speed = ls_to_ethtool[nn_link_status];
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = 1;
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
};
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_UDP;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP;
dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
}
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
if (link.link_status)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
new_ctrl = 0;
/* Enable vlan strip if it is not configured yet */
- if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
!(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
/* Disable vlan strip just if it is configured */
- if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
rss_hf = rss_conf->rss_hf;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
/* Propagate current RSS hash functions to caller */
rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = link_up;
link_speeds = &dev->data->dev_conf.link_speeds;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
negotiate = true;
err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
allowed_speeds = 0;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
- allowed_speeds |= ETH_LINK_SPEED_1G;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_100M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_10M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
if (*link_speeds & ~allowed_speeds) {
PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = hw->mac.default_speeds;
} else {
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= NGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= NGBE_LINK_SPEED_100M_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= NGBE_LINK_SPEED_10M_FULL;
}
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_10M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ~ETH_LINK_SPEED_AUTONEG);
+ ~RTE_ETH_LINK_SPEED_AUTONEG);
hw->mac.get_link_status = true;
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case NGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
case NGBE_LINK_SPEED_10M_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
lan_speed = 0;
break;
case NGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
lan_speed = 1;
break;
case NGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
lan_speed = 2;
break;
}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
- if (link.link_status == ETH_LINK_UP) {
+ if (link.link_status == RTE_ETH_LINK_UP) {
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
ngbe_dev_link_update(dev, 0);
/* likely to up */
- if (link.link_status != ETH_LINK_UP)
+ if (link.link_status != RTE_ETH_LINK_UP)
/* handle it 1 sec later, wait it being stable */
timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
rte_spinlock_t rss_lock;
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[40]; /**< 40-byte hash key. */
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
if (dev == NULL)
return -EINVAL;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
if (dev == NULL)
return 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
internal->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->port_id = eth_dev->data->port_id;
rte_eth_random_addr(internals->eth_addr.addr_bytes);
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
- internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
+ internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
rte_memcpy(internals->rss_key, default_rss_key, 40);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
(eth_dev->data->port_id),
link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
switch (nic->speed) {
case OCTEONTX_LINK_SPEED_SGMII:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case OCTEONTX_LINK_SPEED_XAUI:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_RXAUI:
case OCTEONTX_LINK_SPEED_10G_R:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_QSGMII:
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case OCTEONTX_LINK_SPEED_40G_R:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case OCTEONTX_LINK_SPEED_RESERVE1:
case OCTEONTX_LINK_SPEED_RESERVE2:
default:
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
octeontx_log_err("incorrect link speed %d", nic->speed);
break;
}
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= OCCTX_TX_MULTI_SEG_F;
return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
flags |= OCCTX_RX_MULTI_SEG_F;
eth_dev->data->scattered_rx = 1;
/* If scatter mode is enabled, TX should also be in multi
* seg mode, else memory leak will occur
*/
- nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
- if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
- txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+ txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
octeontx_log_err("setting link speed/duplex not supported");
return -EINVAL;
}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
octeontx_log_err("Scatter mode is disabled");
return -EINVAL;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
/* Setup scatter mode if needed by jumbo */
if (data->mtu > buffsz) {
- nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
struct octeontx_nic *nic = octeontx_pmd_priv(dev);
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_40G;
/* Min/Max MTU supported */
dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
nic->ev_ports = 1;
nic->print_flag = -1;
- data->dev_link.link_status = ETH_LINK_DOWN;
+ data->dev_link.link_status = RTE_ETH_LINK_DOWN;
data->dev_started = 0;
data->promiscuous = 0;
data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,23 @@
#define OCCTX_MAX_MTU (OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
#define OCTEONTX_RX_OFFLOADS ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
static inline struct octeontx_nic *
octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = octeontx_vlan_hw_filter(nic, true);
if (rc)
goto done;
- nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
} else {
rc = octeontx_vlan_hw_filter(nic, false);
if (rc)
goto done;
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
}
}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
TAILQ_INIT(&nic->vlan_info.fltr_tbl);
- rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
if (rc)
octeontx_log_err("Failed to set vlan offload rc=%d", rc);
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
return rc;
if (conf.rx_pause && conf.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (conf.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (conf.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
/* low_water & high_water values are in Bytes */
fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
conf.high_water = fc_conf->high_water;
conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index f491e20e95c1..060d267f5de5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
if (otx2_dev_is_vf(dev) ||
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
/* TSO not supported for earlier chip revisions */
if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
return capa;
}
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
req->npa_func = otx2_npa_pf_func_get();
req->sso_func = otx2_sso_pf_func_get();
req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->rq.sso_ena = 0;
- if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
aq->rq.ipsech_ena = 1;
aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev)) {
rc = otx2_nix_raw_clock_tsc_conv(dev);
if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
- conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Setting up the rx[tx]_offload_flags due to change
* in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
otx2_err("Outer IP and SCTP checksum unsupported");
goto fail_configure;
}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled in PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_enable(eth_dev);
else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
rc = otx2_eth_sec_ctx_create(eth_dev);
if (rc)
goto free_mac_addrs;
- dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
/* Initialize rte-flow */
rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
#define CQ_TIMER_THRESH_MAX 255
-#define NIX_RSS_L3_L4_SRC_DST (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
- | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+ | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
- ETH_RSS_TCP | ETH_RSS_SCTP | \
- ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
- ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+ NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_C_VLAN)
#define NIX_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define NIX_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_QINQ_STRIP | \
- DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
- val = ETH_RSS_RETA_SIZE_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
- val = ETH_RSS_RETA_SIZE_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
- val = ETH_RSS_RETA_SIZE_256;
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+ val = RTE_ETH_RSS_RETA_SIZE_64;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+ val = RTE_ETH_RSS_RETA_SIZE_128;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+ val = RTE_ETH_RSS_RETA_SIZE_256;
else
val = NIX_RSS_RETA_SIZE;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
return -EINVAL;
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * NIX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
};
/* Auto negotiation disabled */
- devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
/* 50G and 100G to be supported for board version C0
* and above.
*/
if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
!RTE_IS_POWER_OF_2(sa_width));
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return 0;
if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
uint16_t port = eth_dev->data->port_id;
char name[RTE_MEMZONE_NAMESIZE];
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
goto err_exit;
}
- if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rc = flow_update_sec_tt(dev, actions);
if (rc != 0) {
rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
int rc;
if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
goto done;
if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rsp->rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (rsp->tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
done:
return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (otx2_dev_is_Ax(dev) &&
(dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_conf.mode =
- (fc_conf.mode == RTE_FC_FULL ||
- fc_conf.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_conf.mode == RTE_ETH_FC_FULL ||
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
return 0;
memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
attr, "No support of RSS in egress");
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
*FLOW_KEY_ALG index. So, till we update the action with
*flow_key_alg index, set the action to drop.
*/
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
flow->npc_action = NIX_RX_ACTIONOP_DROP;
else
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
otx2_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
eth_link.link_status = link->link_up;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
static int
lbk_link_update(struct rte_eth_link *link)
{
- link->link_status = ETH_LINK_UP;
- link->link_speed = ETH_SPEED_NUM_100G;
- link->link_autoneg = ETH_LINK_FIXED;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = RTE_ETH_LINK_UP;
+ link->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
link->link_status = rsp->link_info.link_up;
link->link_speed = rsp->link_info.speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (rsp->link_info.full_duplex)
link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
/* 50G and 100G to be supported for board version C0 and above */
if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
link_speed = 100000;
- if (link_speeds & ETH_LINK_SPEED_50G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
link_speed = 50000;
}
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed = 40000;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed = 25000;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed = 20000;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed = 10000;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
link_speed = 5000;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed = 1000;
return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
static inline uint8_t
nix_parse_eth_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
return cgx_change_mode(dev, &cfg);
}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
/* System time should be already on by default */
nix_start_timecounters(eth_dev);
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
return -EINVAL;
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
rss->ind_tbl[idx] = reta_conf[i].reta[j];
idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = rss->ind_tbl[j];
}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
}
#define RSS_IPV4_ENABLE ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE ( \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE ( \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
dev->rss_info.nix_rss = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
otx2_nix_rss_set_key(dev, rss_conf->rss_key,
(uint32_t)rss_conf->rss_key_len);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
int rc;
/* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
return 0;
/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
}
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0d85c898bfe7..2c18483b98fd 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
/* For PTP enabled, scalar rx function should be chosen as most of the
* PTP apps are implemented to rx burst 1 pkt.
*/
- if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
pick_rx_func(eth_dev, nix_eth_rx_burst);
else
pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ad704d745b04..135615580bbf 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
else
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
* Take offset from LA since in case of untagged packet,
* lbptr is zero.
*/
- if (type == ETH_VLAN_TYPE_OUTER) {
+ if (type == RTE_ETH_VLAN_TYPE_OUTER) {
vtag_action.act.vtag0_def = vtag_index;
vtag_action.act.vtag0_lid = NPC_LID_LA;
vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
if (vlan->strip_on ||
(vlan->qinq_on && !vlan->qinq_before_def)) {
if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- ETH_MQ_RX_RSS)
+ RTE_ETH_MQ_RX_RSS)
vlan->def_rx_mcam_ent.action |=
NIX_RX_ACTIONOP_RSS;
else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
rxmode = &eth_dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, true);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, false);
}
if (rc)
goto done;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, true, 0);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, false, 0);
}
if (rc)
goto done;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
if (!dev->vlan_info.qinq_on) {
- offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, true);
if (rc)
goto done;
}
} else {
if (dev->vlan_info.qinq_on) {
- offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, false);
if (rc)
goto done;
}
}
- if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
tpid_cfg->tpid = tpid;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
else
tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
if (rc)
return rc;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
dev->vlan_info.outer_vlan_tpid = tpid;
else
dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
vlan->outer_vlan_idx = 0;
}
- rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
vtag_index, on);
if (rc < 0) {
printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
} else {
/* Reinstall all mcam entries now if filter offload is set */
if (eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
rc = otx2_nix_vlan_offload_set(eth_dev, mask);
if (rc) {
otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
otx_epvf = OTX_EP_DEV(eth_dev);
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
devinfo->max_rx_queues = otx_epvf->max_rx_queues;
devinfo->max_tx_queues = otx_epvf->max_tx_queues;
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l4_len = hdr_lens.l4_len;
if (droq_pkt->nb_segs > 1 &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
static struct pfe *g_pfe;
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
/* TODO: make pfe_svr a runtime option.
* Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
}
link.link_status = lstatus;
- link.link_speed = ETH_LINK_SPEED_1G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_speed = RTE_ETH_LINK_SPEED_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
pfe_eth_atomic_write_link_status(dev, &link);
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t; /* In DWORDS !!! */
struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define ETH_SPEED_AUTONEG 0
-#define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG 0
+#define RTE_ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
u32 pause; /* bitmask */
#define ETH_PAUSE_NONE 0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
}
use_tx_offload = !!(tx_offloads &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
- DEV_TX_OFFLOAD_TCP_TSO | /* tso */
- DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
if (use_tx_offload) {
DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN filtering kicks in when a VLAN is added */
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
qede_vlan_filter_set(eth_dev, 0, 1);
} else {
if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
* enabled
*/
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
qede_vlan_filter_set(eth_dev, 0, 0);
}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
/* Configure default RETA */
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
- id = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ id = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
q = i % QEDE_RSS_COUNT(eth_dev);
reta_conf[id].reta[pos] = q;
}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure TPA parameters */
- if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (qede_enable_tpa(eth_dev, true))
return -EINVAL;
/* Enable scatter mode for LRO */
if (!eth_dev->data->scattered_rx)
- rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}
/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
if (qede_config_rss(eth_dev))
goto err;
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* We need to have min 1 RX queue.There is no min check in
* rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_NOTICE(edev, false,
"Invalid devargs supplied, requested change will not take effect\n");
- if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
- rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+ if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
DP_ERR(edev, "Unsupported multi-queue mode\n");
return -ENOTSUP;
}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->mtu = eth_dev->data->mtu;
/* Enable VLAN offloads by default */
- ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
- dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
dev_info->rx_queue_offload_capa = 0;
/* TX offloads are on a per-packet basis, so it is applicable
* to both at port and queue levels.
*/
- dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
dev_info->default_txconf = (struct rte_eth_txconf) {
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
};
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
- speed_cap |= ETH_LINK_SPEED_1G;
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
- speed_cap |= ETH_LINK_SPEED_10G;
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
- speed_cap |= ETH_LINK_SPEED_25G;
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
- speed_cap |= ETH_LINK_SPEED_40G;
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
- speed_cap |= ETH_LINK_SPEED_50G;
+ speed_cap |= RTE_ETH_LINK_SPEED_50G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
- speed_cap |= ETH_LINK_SPEED_100G;
+ speed_cap |= RTE_ETH_LINK_SPEED_100G;
dev_info->speed_capa = speed_cap;
return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
/* Link Mode */
switch (q_link.duplex) {
case QEDE_DUPLEX_HALF:
- link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case QEDE_DUPLEX_FULL:
- link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case QEDE_DUPLEX_UNKNOWN:
default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
link.link_duplex = link_duplex;
/* Link Status */
- link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
/* AN */
link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
/* Pause is assumed to be supported (SUPPORTED_Pause) */
- if (fc_conf->mode == RTE_FC_FULL)
+ if (fc_conf->mode == RTE_ETH_FC_FULL)
params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
QED_LINK_PAUSE_RX_ENABLE);
- if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
- if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
QED_LINK_PAUSE_TX_ENABLE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
{
*rss_caps = 0;
- *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
}
int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
uint8_t entry;
int rc = 0;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported by hardware\n",
reta_size);
return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
for_each_hwfn(edev, i) {
for (j = 0; j < reta_size; j++) {
- idx = j / RTE_RETA_GROUP_SIZE;
- shift = j % RTE_RETA_GROUP_SIZE;
+ idx = j / RTE_ETH_RETA_GROUP_SIZE;
+ shift = j % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = reta_conf[idx].reta[shift];
fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
uint16_t i, idx, shift;
uint8_t entry;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported\n",
reta_size);
return -EINVAL;
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = qdev->rss_ind_table[i];
reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
adapter->ipgre.num_filters = 0;
if (is_vf) {
adapter->vxlan.enable = true;
- adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
adapter->geneve.enable = true;
- adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
adapter->ipgre.enable = true;
- adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
- ETH_TUNNEL_FILTER_IVLAN;
+ adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+ RTE_ETH_TUNNEL_FILTER_IVLAN;
} else {
adapter->vxlan.enable = false;
adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
const char *string;
} qede_tunn_types[] = {
{
- ETH_TUNNEL_FILTER_OMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC,
ECORE_FILTER_MAC,
ECORE_TUNN_CLSS_MAC_VLAN,
"outer-mac"
},
{
- ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_VNI,
ECORE_TUNN_CLSS_MAC_VNI,
"vni"
},
{
- ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac"
},
{
- ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_VLAN,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-vlan"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
ECORE_FILTER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_MAC_VNI,
"outer-mac and vni"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-mac"
},
{
- ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-mac and inner-vlan"
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
ECORE_FILTER_INNER_MAC_VNI_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VNI,
"vni and inner-mac",
},
{
- ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"vni and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
ECORE_FILTER_INNER_PAIR,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
"inner-mac and inner-vlan",
},
{
- ETH_TUNNEL_FILTER_OIP,
+ RTE_ETH_TUNNEL_FILTER_OIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"outer-IP"
},
{
- ETH_TUNNEL_FILTER_IIP,
+ RTE_ETH_TUNNEL_FILTER_IIP,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"inner-IP"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN"
},
{
- RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_IVLAN_TENID"
},
{
- RTE_TUNNEL_FILTER_IMAC_TENID,
+ RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"IMAC_TENID"
},
{
- RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+ RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
ECORE_FILTER_UNUSED,
MAX_ECORE_TUNN_CLSS,
"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
/* check FDIR modes */
switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
ECORE_TUNN_CLSS_MAC_VLAN, false);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
qdev->vxlan.udp_port = udp_port;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplify rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
- if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
#define QEDE_MAX_ETHER_HDR_LEN (RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
#define QEDE_ETH_MAX_LEN (RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
-#define QEDE_RSS_OFFLOAD_ALL (ETH_RSS_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP |\
- ETH_RSS_NONFRAG_IPV4_UDP |\
- ETH_RSS_IPV6 |\
- ETH_RSS_NONFRAG_IPV6_TCP |\
- ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN |\
- ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL (RTE_ETH_RSS_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |\
+ RTE_ETH_RSS_IPV6 |\
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |\
+ RTE_ETH_RSS_VXLAN |\
+ RTE_ETH_RSS_GENEVE)
#define QEDE_RXTX_MAX(qdev) \
(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -110,21 +110,21 @@ static int
eth_dev_stop(struct rte_eth_dev *dev)
{
dev->data->dev_started = 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_down(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_up(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = 1;
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
{
uint32_t phy_caps = 0;
- if (~speeds & ETH_LINK_SPEED_FIXED) {
+ if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
phy_caps |= (1 << EFX_PHY_CAP_AN);
/*
* If no speeds are specified in the mask, any supported
* may be negotiated
*/
- if (speeds == ETH_LINK_SPEED_AUTONEG)
+ if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
phy_caps |=
(1 << EFX_PHY_CAP_1000FDX) |
(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
(1 << EFX_PHY_CAP_50000FDX) |
(1 << EFX_PHY_CAP_100000FDX);
}
- if (speeds & ETH_LINK_SPEED_1G)
+ if (speeds & RTE_ETH_LINK_SPEED_1G)
phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
- if (speeds & ETH_LINK_SPEED_10G)
+ if (speeds & RTE_ETH_LINK_SPEED_10G)
phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
- if (speeds & ETH_LINK_SPEED_25G)
+ if (speeds & RTE_ETH_LINK_SPEED_25G)
phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
- if (speeds & ETH_LINK_SPEED_40G)
+ if (speeds & RTE_ETH_LINK_SPEED_40G)
phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
- if (speeds & ETH_LINK_SPEED_50G)
+ if (speeds & RTE_ETH_LINK_SPEED_50G)
phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
- if (speeds & ETH_LINK_SPEED_100G)
+ if (speeds & RTE_ETH_LINK_SPEED_100G)
phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
tx_offloads |= txq_info->offloads;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
else
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
sa->priv.shared->tunnel_encaps =
encp->enc_tunnel_encapsulations_supported;
- if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
if (sa->tso &&
(sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
- (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+ (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
SFC_DP_RX_FEAT_INTR |
SFC_DP_RX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.get_dev_info = sfc_ef100_rx_get_dev_info,
.qsize_up_rings = sfc_ef100_rx_qsize_up_rings,
.qcreate = sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
.features = SFC_DP_TX_FEAT_MULTI_PROCESS |
SFC_DP_TX_FEAT_STATS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef100_get_dev_info,
.qsize_up_rings = sfc_ef100_tx_qsize_up_rings,
.qcreate = sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
},
.features = SFC_DP_RX_FEAT_FLOW_FLAG |
SFC_DP_RX_FEAT_FLOW_MARK,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.queue_offload_capa = 0,
.get_dev_info = sfc_ef10_essb_rx_get_dev_info,
.pool_ops_supported = sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
},
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.get_dev_info = sfc_ef10_rx_get_dev_info,
.qsize_up_rings = sfc_ef10_rx_qsize_up_rings,
.qcreate = sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
if (txq->sw_ring == NULL)
goto fail_sw_ring_alloc;
- if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+ if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
info->txq_entries,
SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_EF10,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
.type = SFC_DP_TX,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = sa->sriov.num_vfs;
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->max_rx_queues = sa->rxq_max;
dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
dev_info->tx_queue_offload_capa;
- if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->default_txconf.offloads |= txq_offloads_def;
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
switch (link_fc) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case EFX_FCNTL_RESPOND:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case EFX_FCNTL_GENERATE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
default:
sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
fcntl = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
fcntl = EFX_FCNTL_RESPOND;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
fcntl = EFX_FCNTL_GENERATE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
break;
default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
- qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
qinfo->scattered_rx = 1;
}
qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
{
switch (rte_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
return EFX_TUNNEL_PROTOCOL_VXLAN;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
return EFX_TUNNEL_PROTOCOL_GENEVE;
default:
return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
/*
* Mapping of hash configuration between RTE and EFX is not one-to-one,
- * hence, conversion is done here to derive a correct set of ETH_RSS
+ * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
* flags which corresponds to the active EFX configuration stored
* locally in 'sfc_adapter' and kept up-to-date
*/
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (entry = 0; entry < reta_size; entry++) {
- int grp = entry / RTE_RETA_GROUP_SIZE;
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[grp].mask >> grp_idx) & 1)
reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
for (entry = 0; entry < reta_size; entry++) {
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
struct rte_eth_rss_reta_entry64 *grp;
- grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+ grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
if (grp->mask & (1ull << grp_idx)) {
if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
- .tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+ .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
.inner_type = RTE_BE16(0xffff),
};
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
memset(link_info, 0, sizeof(*link_info));
if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
- link_info->link_status = ETH_LINK_DOWN;
+ link_info->link_status = RTE_ETH_LINK_DOWN;
else
- link_info->link_status = ETH_LINK_UP;
+ link_info->link_status = RTE_ETH_LINK_UP;
switch (link_mode) {
case EFX_LINK_10HDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_10FDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100HDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_100FDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_1000HDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_1000FDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_10000FDX:
- link_info->link_speed = ETH_SPEED_NUM_10G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_25000FDX:
- link_info->link_speed = ETH_SPEED_NUM_25G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_40000FDX:
- link_info->link_speed = ETH_SPEED_NUM_40G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_40G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_50000FDX:
- link_info->link_speed = ETH_SPEED_NUM_50G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_50G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100000FDX:
- link_info->link_speed = ETH_SPEED_NUM_100G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
SFC_ASSERT(B_FALSE);
/* FALLTHROUGH */
case EFX_LINK_UNKNOWN:
case EFX_LINK_DOWN:
- link_info->link_speed = ETH_SPEED_NUM_NONE;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_NONE;
link_info->link_duplex = 0;
break;
}
- link_info->link_autoneg = ETH_LINK_AUTONEG;
+ link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
}
switch (conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (nb_rx_queues != 1) {
sfcr_err(sr, "Rx RSS is not supported with %u queues",
nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
ret = -EINVAL;
}
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
break;
default:
sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
break;
}
- if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+ if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
sfcr_err(sr, "Tx mode MQ modes not supported");
ret = -EINVAL;
}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
} else {
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
}
return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_RX_EFX,
},
.features = SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.qsize_up_rings = sfc_efx_rx_qsize_up_rings,
.qcreate = sfc_efx_rx_qcreate,
.qdestroy = sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (encp->enc_tunnel_encapsulations_supported == 0)
- no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
return ~no_caps;
}
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
encp->enc_rx_prefix_size,
- (offloads & DEV_RX_OFFLOAD_SCATTER),
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
encp->enc_rx_scatter_max,
&error)) {
sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
rxq_info->type_flags |=
- (offloads & DEV_RX_OFFLOAD_SCATTER) ?
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
if ((encp->enc_tunnel_encapsulations_supported != 0) &&
(sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->refill_mb_pool = mb_pool;
if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
- (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
else
rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
* Mapping between RTE RSS hash functions and their EFX counterparts.
*/
static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
- { ETH_RSS_NONFRAG_IPV4_TCP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV4_UDP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP,
EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
- { ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE) },
- { ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_IPV6_EX,
+ { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_EX,
EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
EFX_RX_HASH(IPV6, 2TUPLE) }
};
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
int rc = 0;
switch (rxmode->mq_mode) {
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* No special checks are required */
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
sfc_err(sa, "RSS is not available");
rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
* so unsupported offloads cannot be added as the result of
* below check.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
- (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+ (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
- rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
- if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
- rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
}
return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
}
configure_rss:
- rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+ rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (!encp->enc_hw_tx_insert_vlan_enabled)
- no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (!encp->enc_tunnel_encapsulations_supported)
- no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (!sa->tso)
- no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
- no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
- no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
return ~no_caps;
}
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
}
/* We either perform both TCP and UDP offload, or no offload at all */
- if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
- ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+ if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+ ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
sfc_err(sa, "TCP and UDP offloads can't be set independently");
rc = EINVAL;
}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
int rc = 0;
switch (txmode->mq_mode) {
- case ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_NONE:
break;
default:
sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
if (rc != 0)
goto fail_ev_qstart;
- if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
- if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
- if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
flags |= EFX_TXQ_CKSUM_TCPUDP;
- if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
- if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/*
* Here VLAN TCI is expected to be zero in case if no
- * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
* if the calling app ignores the absence of
- * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
* TX_ERROR will occur
*/
pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_TX_EFX,
},
.features = 0,
- .dev_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
.qsize_up_rings = sfc_efx_tx_qsize_up_rings,
.qcreate = sfc_efx_tx_qcreate,
.qdestroy = sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
return status;
/* Link UP */
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
struct pmd_internals *p = dev->data->dev_private;
/* Link DOWN */
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
/* Firmware */
softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
/* dev->data */
dev->data->dev_private = dev_private;
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
dev->data->mac_addrs = &eth_addr;
dev->data->promiscuous = 1;
dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
eth_dev_configure(struct rte_eth_dev *dev)
{
struct rte_eth_dev_data *data = dev->data;
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
dev->rx_pkt_burst = eth_szedata2_rx_scattered;
data->scattered_rx = 1;
} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_queues = internals->max_rx_queues;
dev_info->max_tx_queues = internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
dev_info->tx_offload_capa = 0;
dev_info->rx_queue_offload_capa = 0;
dev_info->tx_queue_offload_capa = 0;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_status = ETH_LINK_UP;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
#define TAP_IOV_DEFAULT_MAX 1024
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
static int tap_devices_count;
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
static volatile uint32_t tap_trigger; /* Rx trigger */
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
len = readv(process_private->rxq_fds[rxq->queue_id],
*rxq->iovecs,
- 1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+ 1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
rxq->nb_rx_desc : 1));
if (len < (int)sizeof(struct tun_pi))
break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
seg->next = NULL;
mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
RTE_PTYPE_ALL_MASK);
- if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
tap_verify_csum(mbuf);
/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
}
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
}
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
uint32_t speed = pmd_link.link_speed;
uint32_t capa = 0;
- if (speed >= ETH_SPEED_NUM_10M)
- capa |= ETH_LINK_SPEED_10M;
- if (speed >= ETH_SPEED_NUM_100M)
- capa |= ETH_LINK_SPEED_100M;
- if (speed >= ETH_SPEED_NUM_1G)
- capa |= ETH_LINK_SPEED_1G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_2_5G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_5G;
- if (speed >= ETH_SPEED_NUM_10G)
- capa |= ETH_LINK_SPEED_10G;
- if (speed >= ETH_SPEED_NUM_20G)
- capa |= ETH_LINK_SPEED_20G;
- if (speed >= ETH_SPEED_NUM_25G)
- capa |= ETH_LINK_SPEED_25G;
- if (speed >= ETH_SPEED_NUM_40G)
- capa |= ETH_LINK_SPEED_40G;
- if (speed >= ETH_SPEED_NUM_50G)
- capa |= ETH_LINK_SPEED_50G;
- if (speed >= ETH_SPEED_NUM_56G)
- capa |= ETH_LINK_SPEED_56G;
- if (speed >= ETH_SPEED_NUM_100G)
- capa |= ETH_LINK_SPEED_100G;
+ if (speed >= RTE_ETH_SPEED_NUM_10M)
+ capa |= RTE_ETH_LINK_SPEED_10M;
+ if (speed >= RTE_ETH_SPEED_NUM_100M)
+ capa |= RTE_ETH_LINK_SPEED_100M;
+ if (speed >= RTE_ETH_SPEED_NUM_1G)
+ capa |= RTE_ETH_LINK_SPEED_1G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_2_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_10G)
+ capa |= RTE_ETH_LINK_SPEED_10G;
+ if (speed >= RTE_ETH_SPEED_NUM_20G)
+ capa |= RTE_ETH_LINK_SPEED_20G;
+ if (speed >= RTE_ETH_SPEED_NUM_25G)
+ capa |= RTE_ETH_LINK_SPEED_25G;
+ if (speed >= RTE_ETH_SPEED_NUM_40G)
+ capa |= RTE_ETH_LINK_SPEED_40G;
+ if (speed >= RTE_ETH_SPEED_NUM_50G)
+ capa |= RTE_ETH_LINK_SPEED_50G;
+ if (speed >= RTE_ETH_SPEED_NUM_56G)
+ capa |= RTE_ETH_LINK_SPEED_56G;
+ if (speed >= RTE_ETH_SPEED_NUM_100G)
+ capa |= RTE_ETH_LINK_SPEED_100G;
return capa;
}
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
if (!(ifr.ifr_flags & IFF_UP) ||
!(ifr.ifr_flags & IFF_RUNNING)) {
- dev_link->link_status = ETH_LINK_DOWN;
+ dev_link->link_status = RTE_ETH_LINK_DOWN;
return 0;
}
}
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
dev_link->link_status =
((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
- ETH_LINK_UP :
- ETH_LINK_DOWN);
+ RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN);
return 0;
}
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
int ret;
/* initialize GSO context */
- gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!pmd->gso_ctx_mp) {
/*
* Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->csum = !!(offloads &
- (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM));
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1768,7 +1768,7 @@ static int
tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- if (fc_conf->mode != RTE_FC_NONE)
+ if (fc_conf->mode != RTE_ETH_FC_NONE)
return -ENOTSUP;
return 0;
}
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
}
}
}
- pmd_link.link_speed = ETH_SPEED_NUM_10G;
+ pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
return 0;
}
- speed = ETH_SPEED_NUM_10G;
+ speed = RTE_ETH_SPEED_NUM_10G;
/* use tap%d which causes kernel to choose next available */
strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
#define TAP_RSS_HASH_KEY_SIZE 40
/* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
/* hashed fields for RSS */
enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8ce9a99dc074..762647e3b6ee 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
if (nic->duplex == NICVF_HALF_DUPLEX)
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else if (nic->duplex == NICVF_FULL_DUPLEX)
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_speed = nic->speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* rte_eth_link_get() might need to wait up to 9 seconds */
for (i = 0; i < MAX_CHECK_TIME; i++) {
nicvf_link_status_update(nic, &link);
- if (link.link_status == ETH_LINK_UP)
+ if (link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
{
uint64_t nic_rss = 0;
- if (ethdev_rss & ETH_RSS_IPV4)
+ if (ethdev_rss & RTE_ETH_RSS_IPV4)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_IPV6)
+ if (ethdev_rss & RTE_ETH_RSS_IPV6)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
nic_rss |= RSS_TUN_VXLAN_ENA;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
nic_rss |= RSS_TUN_GENEVE_ENA;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
nic_rss |= RSS_TUN_NVGRE_ENA;
}
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
uint64_t ethdev_rss = 0;
if (nic_rss & RSS_IP_ENA)
- ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+ ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP);
if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
- ethdev_rss |= ETH_RSS_PORT;
+ ethdev_rss |= RTE_ETH_RSS_PORT;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
if (nic_rss & RSS_TUN_VXLAN_ENA)
- ethdev_rss |= ETH_RSS_VXLAN;
+ ethdev_rss |= RTE_ETH_RSS_VXLAN;
if (nic_rss & RSS_TUN_GENEVE_ENA)
- ethdev_rss |= ETH_RSS_GENEVE;
+ ethdev_rss |= RTE_ETH_RSS_GENEVE;
if (nic_rss & RSS_TUN_NVGRE_ENA)
- ethdev_rss |= ETH_RSS_NVGRE;
+ ethdev_rss |= RTE_ETH_RSS_NVGRE;
}
return ethdev_rss;
}
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = tbl[j];
}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
tbl[j] = reta_conf[i].reta[j];
}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
dev->data->nb_rx_queues,
dev->data->dev_conf.lpbk_mode, rsshf);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
ret = nicvf_rss_term(nic);
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
if (ret)
PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
multiseg = true;
break;
}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->offloads = offloads;
- is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+ is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
/* Choose optimum free threshold value for multipool case */
if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
PMD_INIT_FUNC_TRACE();
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
};
return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
/* Configure VLAN Strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = nicvf_vlan_offload_config(dev, mask);
/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
/* Setup scatter mode if needed by jumbo */
if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!rte_eal_has_hugepages()) {
PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
return -EINVAL;
}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic->offload_cksum = 1;
PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct nicvf *nic = nicvf_pmd_priv(dev);
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
nicvf_vlan_hw_strip(nic, true);
else
nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
#define NICVF_UNKNOWN_DUPLEX 0xff
#define NICVF_RSS_OFFLOAD_PASS1 ( \
- ETH_RSS_PORT | \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define NICVF_RSS_OFFLOAD_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
#define NICVF_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define NICVF_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NICVF_DEFAULT_RX_FREE_THRESH 224
#define NICVF_DEFAULT_TX_FREE_THRESH 224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
restart = (rxcfg & TXGBE_RXCFG_ENA) &&
!(rxcfg & TXGBE_RXCFG_VLAN);
rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
wr32m(hw, TXGBE_VLANCTL,
TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
TXGBE_TAGTPID_LSB(tpid));
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (vlan_ext) {
/* Only the high 16-bits is valid */
wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
txgbe_vlan_strip_queue_set(dev, i, 1);
else
txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct txgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
txgbe_vlan_hw_strip_config(dev);
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
txgbe_vlan_hw_filter_enable(dev);
else
txgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
txgbe_vlan_hw_extend_enable(dev);
else
txgbe_vlan_hw_extend_disable(dev);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
txgbe_qinq_hw_strip_enable(dev);
else
txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode =
- ETH_MQ_RX_VMDQ_ONLY;
+ RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
dev->data->dev_conf.txmode.mq_mode =
- ETH_MQ_TX_VMDQ_ONLY;
+ RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
txgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
if (err)
goto error;
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
link_speeds = &dev->data->dev_conf.link_speeds;
if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = (TXGBE_LINK_SPEED_100M_FULL |
TXGBE_LINK_SPEED_1GB_FULL |
TXGBE_LINK_SPEED_10GB_FULL);
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= TXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= TXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= TXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= TXGBE_LINK_SPEED_100M_FULL;
}
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_desc_lim = tx_desc_lim;
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_AUTONEG);
hw->mac.get_link_status = true;
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
}
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case TXGBE_LINK_SPEED_UNKNOWN:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case TXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case TXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case TXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
return -ENOTSUP;
}
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
wr32(hw, TXGBE_UCADDRTBL(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
wr32(hw, TXGBE_UCADDRTBL(i), 0);
}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= TXGBE_POOLETHCTL_UTA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= TXGBE_POOLETHCTL_MCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= TXGBE_POOLETHCTL_UCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= TXGBE_POOLETHCTL_BCA;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= TXGBE_POOLETHCTL_MCP;
return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = TXGBE_INCVAL_100;
shift = TXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = TXGBE_INCVAL_1GB;
shift = TXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = TXGBE_INCVAL_10GB;
shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, 0);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, 0);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, 0);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
#define TXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define TXGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define TXGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
txgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
txgbevf_vlan_strip_queue_set(dev, i, on);
}
}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
uint32_t *fdirctrl, uint32_t *flex)
{
*fdirctrl = 0;
*flex = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_signature_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= TXGBE_SECRXCTL_CRCSTRIP;
wr32(hw, TXGBE_SECRXCTL, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
reg = rd32(hw, TXGBE_SECTXCTL);
if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct txgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
break;
}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &eth_dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = TXGBE_DEV_HW(eth_dev);
vmvir = rd32(hw, TXGBE_POOLTAG(vf));
vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
uint64_t
txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_VLAN_STRIP;
+ return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (!txgbe_is_vf(dev))
- offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_VLAN_EXTEND);
+ offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
/*
* RSC is only supported by PF devices in a non-SR-IOV
* mode.
*/
if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == txgbe_mac_raptor)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_UDP_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (!txgbe_is_vf(dev))
- tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_VFPLCFG_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
} else {
mrqc = rd32(hw, TXGBE_RACTL);
mrqc &= ~TXGBE_RACTL_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_RACTL_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_RACTL_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_RACTL_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_RACTL_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6UDP;
if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
rss_hf = 0;
} else {
mrqc = rd32(hw, TXGBE_RACTL);
if (mrqc & TXGBE_RACTL_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_RACTL_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_RACTL_RSSENA))
rss_hf = 0;
}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
*/
if (adapter->rss_reta_updated == 0) {
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == dev->data->nb_rx_queues)
j = 0;
reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
txgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
- if (num_pools == ETH_16_POOLS) {
+ if (num_pools == RTE_ETH_16_POOLS) {
mrqc = TXGBE_PORTCTL_NUMTC_8;
mrqc |= TXGBE_PORTCTL_NUMVT_16;
} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_POOLCTL, vt_ctl);
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
wr32(hw, TXGBE_POOLRXENA(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
wr32(hw, TXGBE_ETHADDRIDX, 0);
wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
/*PF VF Transmit Enable*/
wr32(hw, TXGBE_POOLTXENA(0),
vmdq_tx_conf->nb_queue_pools ==
- ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_rx = DCB_RX_CONFIG;
/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
/*Configure general VMDQ and DCB RX parameters*/
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
- if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
wr32(hw, TXGBE_PBRXSIZE(i), 0);
}
if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
wr32(hw, TXGBE_PBTXSIZE(i), 0);
wr32(hw, TXGBE_PBTXDMATH(i), 0);
}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = txgbe_dcb_pfc_enabled;
}
txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* pool enabling for receive - 64 */
wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
txgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
txgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
txgbe_vmdq_tx_hw_configure(hw);
else
wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
"is disabled");
return -EINVAL;
}
rfctl = rd32(hw, TXGBE_PSRCTL);
- if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~TXGBE_PSRCTL_RSCDIA;
else
rfctl |= TXGBE_PSRCTL_RSCDIA;
wr32(hw, TXGBE_PSRCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
}
#endif
}
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = rd32(hw, TXGBE_PSRCTL);
rxcsum |= TXGBE_PSRCTL_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= TXGBE_PSRCTL_L4CSUM;
else
rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == txgbe_mac_raptor) {
rdrxctl = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
txgbe_setup_loopback_link_raptor(hw);
#ifdef RTE_LIB_SECURITY
- if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
- (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = txgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Set PSR type for VF RSS according to max Rx queue */
psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
*/
wr32(hw, TXGBE_RXCFG(i), srrctl);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
* little-endian order.
*/
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == conf->conf.queue_num)
j = 0;
reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
uint8_t rx_deferred_start; /**< not in global dev start. */
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 86498365e149..17b6a1a1ceec 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN
};
struct rte_vhost_vring_state {
@@ -817,7 +817,7 @@ new_device(int vid)
rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_atomic32_set(&internal->dev_attached, 1);
update_queuing_status(eth_dev);
@@ -852,7 +852,7 @@ destroy_device(int vid)
rte_atomic32_set(&internal->dev_attached, 0);
update_queuing_status(eth_dev);
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1118,7 +1118,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
if (vhost_driver_setup(dev) < 0)
return -1;
- internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -1267,9 +1267,9 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_tx_queues = internal->max_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index ddf0e26ab4db..94120b349023 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
virtio_dev_close(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "virtio_dev_close");
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1774,7 +1774,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
- if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+ if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
config = &local_config;
virtio_read_dev_config(hw,
@@ -1788,7 +1788,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
}
}
if (hw->duplex == DUPLEX_UNKNOWN)
- hw->duplex = ETH_LINK_FULL_DUPLEX;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
hw->speed, hw->duplex);
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1887,7 +1887,7 @@ int
eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+ uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
int vectorized = 0;
int ret;
@@ -1958,22 +1958,22 @@ static uint32_t
virtio_dev_speed_capa_get(uint32_t speed)
{
switch (speed) {
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -2089,14 +2089,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "configure");
req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
- if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Rx multi queue mode %d",
rxmode->mq_mode);
return -EINVAL;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Tx multi queue mode %d",
txmode->mq_mode);
@@ -2114,20 +2114,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
req_features |=
(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
- if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_CSUM);
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
req_features |=
(1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2139,15 +2139,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
PMD_DRV_LOG(ERR,
"rx checksum not available on this host");
return -ENOTSUP;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
PMD_DRV_LOG(ERR,
@@ -2159,12 +2159,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
virtio_dev_cq_start(dev);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
hw->vlan_strip = 1;
- hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+ hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
- if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(ERR,
"vlan filtering not available on this host");
@@ -2217,7 +2217,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(INFO,
"disabled packed ring vectorized rx for TCP_LRO enabled");
hw->use_vec_rx = 0;
@@ -2244,10 +2244,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN_STRIP)) {
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
PMD_DRV_LOG(INFO,
"disabled split ring vectorized rx for offloading enabled");
hw->use_vec_rx = 0;
@@ -2440,7 +2440,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
struct rte_eth_link link;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "stop");
dev->data->dev_started = 0;
@@ -2481,28 +2481,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
memset(&link, 0, sizeof(link));
link.link_duplex = hw->duplex;
link.link_speed = hw->speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (!hw->started) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
PMD_INIT_LOG(DEBUG, "Get link status from hw");
virtio_read_dev_config(hw,
offsetof(struct virtio_net_config, status),
&status, sizeof(status));
if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
PMD_INIT_LOG(DEBUG, "Port %d is down",
dev->data->port_id);
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
PMD_INIT_LOG(DEBUG, "Port %d is up",
dev->data->port_id);
}
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2515,8 +2515,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct virtio_hw *hw = dev->data->dev_private;
uint64_t offloads = rxmode->offloads;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(NOTICE,
@@ -2526,8 +2526,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (mask & ETH_VLAN_STRIP_MASK)
- hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
+ hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -2549,32 +2549,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = hw->max_mtu;
host_features = VIRTIO_OPS(hw)->get_features(hw);
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}
tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
/*
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
#define VMXNET3_TX_MAX_SEG UINT8_MAX
#define VMXNET3_TX_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define VMXNET3_RX_OFFLOAD_CAP \
- (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* set the initial link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(eth_dev, &link);
return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
hw->queueDescPA = mz->iova;
hw->queue_desc_len = (uint16_t)size;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Allocate memory structure for UPT1_RSSConf and configure */
mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
"rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
devRead->rxFilterConf.rxMode = 0;
/* Setting up feature flags */
- if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
devRead->misc.uptFeatures |= VMXNET3_F_LRO;
devRead->misc.maxNumRxSG = 0;
}
- if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
ret = vmxnet3_rss_configure(dev);
if (ret != VMXNET3_SUCCESS)
return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
}
ret = vmxnet3_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
}
if (VMXNET3_VERSION_GE_4(hw) &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Check for additional RSS */
ret = vmxnet3_v4_rss_configure(dev);
if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
dev_info->max_mtu = VMXNET3_MAX_MTU;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret & 0x1)
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint32_t *vf_table = devRead->rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
else
devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
VMXNET3_CMD_UPDATE_FEATURE);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
VMXNET3_MAX_RX_QUEUES + 1)
#define VMXNET3_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define VMXNET3_V4_RSS_MASK ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define VMXNET3_MANDATORY_V4_RSS ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
/* RSS configuration structure - shared with device through GPA */
typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
rss_hf = port_rss_conf->rss_hf &
(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
/* loading hashType */
dev_rss_conf->hashType = 0;
rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index a26076b312e5..ecafc5e4f1a9 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -70,11 +70,11 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -327,7 +327,7 @@ check_port_link_status(uint16_t port_id)
if (link_get_err >= 0 && link.link_status) {
const char *dp = (link.link_duplex ==
- ETH_LINK_FULL_DUPLEX) ?
+ RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex";
printf("\nPort %u Link Up - speed %s - %s\n",
port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index fd8fd767c811..1087b0dad125 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -114,17 +114,17 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -148,9 +148,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
@@ -240,9 +240,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
BOND_PORT, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
if (retval != 0)
rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
}
},
};
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
int ret;
memset(&cfg_port, 0, sizeof(cfg_port));
- cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+ cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
pause_param->tx_pause = 0;
pause_param->rx_pause = 0;
switch (fc_conf.mode) {
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
pause_param->rx_pause = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
pause_param->tx_pause = 1;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_param->rx_pause = 1;
pause_param->tx_pause = 1;
default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
if (pause_param->tx_pause) {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_FULL;
+ fc_conf.mode = RTE_ETH_FC_FULL;
else
- fc_conf.mode = RTE_FC_TX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
} else {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_RX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf.mode = RTE_FC_NONE;
+ fc_conf.mode = RTE_ETH_FC_NONE;
}
status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
for (vf = 0; vf < num_vfs; vf++) {
#ifdef RTE_NET_IXGBE
rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
- ETH_VMDQ_ACCEPT_UNTAG, 0);
+ RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
#endif
}
/* Enable Rx vlan filter, VF unspport status is discard */
- ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
if (ret != 0)
return ret;
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
memset(&link, 0, sizeof(link));
do {
link_get_err = rte_eth_link_get(port_id, &link);
- if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+ if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
if (link_get_err < 0)
rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
rte_strerror(-link_get_err));
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
}
@@ -138,12 +138,12 @@ init_port(void)
},
.txmode = {
.offloads =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
},
};
struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
/* Configuring port to use RSS for multiple RX queues. 8< */
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_PROTO_MASK,
+ .rss_hf = RTE_ETH_RSS_PROTO_MASK,
}
}
};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index d51133199c42..4ffe997baf23 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,13 +148,13 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER),
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -623,7 +623,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -267,5 +267,5 @@ link_is_up(const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 06dc42799314..41e35593867b 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -160,22 +160,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -737,7 +737,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1095,9 +1095,9 @@ main(int argc, char **argv)
n_tx_queue = nb_lcores;
if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
n_tx_queue = MAX_TX_QUEUE_PER_PORT;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a10e330f5003..1c60ac28e317 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -233,19 +233,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1444,10 +1444,10 @@ print_usage(const char *prgname)
" \"parallel\" : Parallel\n"
" --" CMD_LINE_OPT_RX_OFFLOAD
": bitmask of the RX HW offload capabilities to enable/use\n"
- " (DEV_RX_OFFLOAD_*)\n"
+ " (RTE_ETH_RX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_TX_OFFLOAD
": bitmask of the TX HW offload capabilities to enable/use\n"
- " (DEV_TX_OFFLOAD_*)\n"
+ " (RTE_ETH_TX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_REASSEMBLE " NUM"
": max number of entries in reassemble(fragment) table\n"
" (zero (default value) disables reassembly)\n"
@@ -1898,7 +1898,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2201,8 +2201,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2225,12 +2225,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
portid, local_port_conf.txmode.offloads,
dev_info.tx_offload_capa);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
printf("port %u configuring rx_offloads=0x%" PRIx64
", tx_offloads=0x%" PRIx64 "\n",
@@ -2288,7 +2288,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
/* Pre-populate pkt offloads based on capabilities */
qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
- if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
tx_queueid++;
@@ -2649,7 +2649,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
struct rte_flow *flow;
int ret;
- if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
if (inbound) {
if ((dev_info.rx_offload_capa &
- DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware RX IPSec offload is not supported\n");
return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
} else { /* outbound */
if ((dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+ *rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
}
/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
return 0;
}
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 73391ce1a96d..bdcaa3bcd1ca 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -114,8 +114,8 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
},
};
@@ -619,7 +619,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 69a0afced6cc..d324ee224109 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -94,7 +94,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -607,9 +607,9 @@ init_port(uint16_t port)
"Error during getting device (port %u) info: %s\n",
port, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -687,7 +687,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6e2016752fca..04a3bdace20c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,11 +215,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1807,7 +1807,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2631,9 +2631,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (retval < 0) {
printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
int ret;
if (rsrc->event_mode) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}
/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
local_port_conf.rx_adv_conf.rss_conf.rss_hf);
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queue. 8< */
ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 62981663ea78..d8eabe4c869e 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -93,7 +93,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -725,7 +725,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -868,9 +868,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index af59d51b3ec4..78fc48f781fc 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -477,7 +477,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -649,9 +649,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 8feb50e0f542..c9d8d4918a34 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -605,7 +605,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -791,9 +791,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the number of queues for a port. */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 410ec94b4131..1fb180723582 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -123,19 +123,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1935,7 +1935,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2003,7 +2003,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2087,9 +2087,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* Clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -828,9 +828,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 39624993b081..21c79567b1f7 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -249,18 +249,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_UDP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
@@ -2196,7 +2196,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2509,7 +2509,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -2637,9 +2637,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
rte_panic("Error during getting device (port %u) info:"
"%s\n", port_id, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 202ef78b6e95..5dd3e4136ea1 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -119,18 +119,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -902,7 +902,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -987,7 +987,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -1052,15 +1052,15 @@ l3fwd_poll_resource_setup(void)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
if (dev_info.max_rx_queues == 1)
- local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index ce8ae059d789..551f0524da79 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.intr_conf = {
.lsc = 1, /**< lsc interrupt feature enabled */
@@ -146,7 +146,7 @@ print_stats(void)
link_get_err < 0 ? "0" :
rte_eth_link_speed_to_str(link.link_speed),
link_get_err < 0 ? "Link get failed" :
- (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex"),
port_statistics[portid].tx,
port_statistics[portid].rx,
@@ -506,7 +506,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -633,9 +633,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index be669c2bcc06..a4d7a3e5436a 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -93,7 +93,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS
+ .mq_mode = RTE_ETH_MQ_RX_RSS
}
};
const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -212,7 +212,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index a66328ba0caf..b35886a77b00 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -175,18 +175,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
{
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -217,9 +217,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
info.default_rxconf.rx_drop_en = 1;
- if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -391,7 +391,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
static struct rte_eth_conf eth_port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
return ret;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
if (ret != 0)
return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 74e016e1d20d..3a6a33bda3b0 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -306,18 +306,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_TCP,
+ .rss_hf = RTE_ETH_RSS_TCP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -3437,7 +3437,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3490,7 +3490,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
conf->rxmode.mtu = max_pkt_len - overhead_len;
if (conf->rxmode.mtu > RTE_ETHER_MTU)
- conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -3589,9 +3589,9 @@ main(int argc, char **argv)
"Invalid max packet length: %u (port %u)\n",
max_pkt_len, portid);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Force full Tx path in the driver, required for IEEE1588 */
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
***/
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -332,8 +332,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_rx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_tx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (hw_timestamping) {
- if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+ if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
printf("\nERROR: Port %u does not support hardware timestamping\n"
, port);
return -1;
}
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
if (hwts_dynfield_offset < 0) {
printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index a19934dbe0c8..0e5e3b5a9815 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -95,7 +95,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -114,9 +114,9 @@ init_port(uint16_t port_num)
if (retval != 0)
return retval;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/*
* Standard DPDK port initialisation - config port, then set up
@@ -276,7 +276,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 97218917067e..44376417f83d 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
/* empty vmdq configuration structure. Filled in programatically */
static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
/*
* VLAN strip is necessary for 1G NIC such as I350,
* this fixes bug of ipv4 forwarding in guest can't
* forward pakets from one virtio dev to another virtio dev.
*/
- .offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO),
},
.rx_adv_conf = {
/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
return -1;
rx_rings = (uint16_t)dev_info.max_rx_queues;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
case 'P':
promiscuous = 1;
vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
- ETH_VMDQ_ACCEPT_BROADCAST |
- ETH_VMDQ_ACCEPT_MULTICAST;
+ RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+ RTE_ETH_VMDQ_ACCEPT_MULTICAST;
break;
case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 85996bf864b7..feee642f594d 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -65,12 +65,12 @@ static uint8_t rss_enable;
/* empty vmdq configuration structure. Filled in programatically */
static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
/*
@@ -78,7 +78,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -156,11 +156,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -258,9 +258,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
if (retval != 0)
return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index be0179fdeaf0..d2218f2cf741 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -59,8 +59,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
static unsigned num_ports;
/* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs num_tcs = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs num_tcs = RTE_ETH_4_TCS;
static uint16_t num_queues, num_vmdq_queues;
static uint16_t vmdq_pool_base, vmdq_queue_base;
static uint8_t rss_enable;
@@ -68,11 +68,11 @@ static uint8_t rss_enable;
/* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
@@ -80,7 +80,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -88,12 +88,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.dcb_tc = {0},
},
.dcb_rx_conf = {
- .nb_tcs = ETH_4_TCS,
+ .nb_tcs = RTE_ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -102,7 +102,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.dcb_tc = {0},
},
},
@@ -156,7 +156,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
conf.pool_map[i].pools = 1UL << i;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
@@ -172,11 +172,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -270,9 +270,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -381,9 +381,9 @@ vmdq_parse_num_pools(const char *q_arg)
if (n != 16 && n != 32)
return -1;
if (n == 16)
- num_pools = ETH_16_POOLS;
+ num_pools = RTE_ETH_16_POOLS;
else
- num_pools = ETH_32_POOLS;
+ num_pools = RTE_ETH_32_POOLS;
return 0;
}
@@ -403,9 +403,9 @@ vmdq_parse_num_tcs(const char *q_arg)
if (n != 4 && n != 8)
return -1;
if (n == 4)
- num_tcs = ETH_4_TCS;
+ num_tcs = RTE_ETH_4_TCS;
else
- num_tcs = ETH_8_TCS;
+ num_tcs = RTE_ETH_8_TCS;
return 0;
}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b530ac6e320a..dcbffd4265fa 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -114,7 +114,7 @@ struct rte_eth_dev_data {
/** Device Ethernet link address. @see rte_eth_dev_release_port() */
struct rte_ether_addr *mac_addrs;
/** Bitmap associating MAC addresses to pools */
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
/**
* Device Ethernet MAC addresses of hash filtering.
* @see rte_eth_dev_release_port()
@@ -1700,23 +1700,23 @@ struct rte_eth_syn_filter {
/**
* filter type of tunneling packet
*/
-#define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_IVLAN | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
- ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
- ETH_TUNNEL_FILTER_TENID | \
- ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP 0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC 0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP 0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_IVLAN | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+ RTE_ETH_TUNNEL_FILTER_TENID | \
+ RTE_ETH_TUNNEL_FILTER_IMAC)
/**
* Select IPv4 or IPv6 for tunnel filters.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4ea5a657e003..9b6007803dd8 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
#define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
#define RTE_RX_OFFLOAD_BIT2STR(_name) \
- { DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name) \
{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
static const struct {
@@ -128,14 +125,14 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
- RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
};
#undef RTE_RX_OFFLOAD_BIT2STR
#undef RTE_ETH_RX_OFFLOAD_BIT2STR
#define RTE_TX_OFFLOAD_BIT2STR(_name) \
- { DEV_TX_OFFLOAD_##_name, #_name }
+ { RTE_ETH_TX_OFFLOAD_##_name, #_name }
static const struct {
uint64_t offload;
@@ -1182,32 +1179,32 @@ uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
switch (speed) {
- case ETH_SPEED_NUM_10M:
- return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
- case ETH_SPEED_NUM_100M:
- return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
- case ETH_SPEED_NUM_1G:
- return ETH_LINK_SPEED_1G;
- case ETH_SPEED_NUM_2_5G:
- return ETH_LINK_SPEED_2_5G;
- case ETH_SPEED_NUM_5G:
- return ETH_LINK_SPEED_5G;
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10M:
+ return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+ case RTE_ETH_SPEED_NUM_100M:
+ return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+ case RTE_ETH_SPEED_NUM_1G:
+ return RTE_ETH_LINK_SPEED_1G;
+ case RTE_ETH_SPEED_NUM_2_5G:
+ return RTE_ETH_LINK_SPEED_2_5G;
+ case RTE_ETH_SPEED_NUM_5G:
+ return RTE_ETH_LINK_SPEED_5G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -1528,7 +1525,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t max_rx_pktlen;
uint32_t overhead_len;
@@ -1585,12 +1582,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
- if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
- (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+ (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
port_id,
- rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+ rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
ret = -EINVAL;
goto rollback;
}
@@ -2213,7 +2210,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* size is supported by the configured device.
*/
/* Get the real Ethernet overhead length */
- if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
uint32_t overhead_len;
uint32_t max_rx_pktlen;
int ret;
@@ -2793,21 +2790,21 @@ const char *
rte_eth_link_speed_to_str(uint32_t link_speed)
{
switch (link_speed) {
- case ETH_SPEED_NUM_NONE: return "None";
- case ETH_SPEED_NUM_10M: return "10 Mbps";
- case ETH_SPEED_NUM_100M: return "100 Mbps";
- case ETH_SPEED_NUM_1G: return "1 Gbps";
- case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
- case ETH_SPEED_NUM_5G: return "5 Gbps";
- case ETH_SPEED_NUM_10G: return "10 Gbps";
- case ETH_SPEED_NUM_20G: return "20 Gbps";
- case ETH_SPEED_NUM_25G: return "25 Gbps";
- case ETH_SPEED_NUM_40G: return "40 Gbps";
- case ETH_SPEED_NUM_50G: return "50 Gbps";
- case ETH_SPEED_NUM_56G: return "56 Gbps";
- case ETH_SPEED_NUM_100G: return "100 Gbps";
- case ETH_SPEED_NUM_200G: return "200 Gbps";
- case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+ case RTE_ETH_SPEED_NUM_NONE: return "None";
+ case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
+ case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+ case RTE_ETH_SPEED_NUM_1G: return "1 Gbps";
+ case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+ case RTE_ETH_SPEED_NUM_5G: return "5 Gbps";
+ case RTE_ETH_SPEED_NUM_10G: return "10 Gbps";
+ case RTE_ETH_SPEED_NUM_20G: return "20 Gbps";
+ case RTE_ETH_SPEED_NUM_25G: return "25 Gbps";
+ case RTE_ETH_SPEED_NUM_40G: return "40 Gbps";
+ case RTE_ETH_SPEED_NUM_50G: return "50 Gbps";
+ case RTE_ETH_SPEED_NUM_56G: return "56 Gbps";
+ case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+ case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+ case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
default: return "Invalid";
}
}
@@ -2831,14 +2828,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
return -EINVAL;
}
- if (eth_link->link_status == ETH_LINK_DOWN)
+ if (eth_link->link_status == RTE_ETH_LINK_DOWN)
return snprintf(str, len, "Link down");
else
return snprintf(str, len, "Link up at %s %s %s",
rte_eth_link_speed_to_str(eth_link->link_speed),
- (eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"FDX" : "HDX",
- (eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+ (eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
"Autoneg" : "Fixed");
}
@@ -3745,7 +3742,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
dev = &rte_eth_devices[port_id];
if (!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n",
port_id);
return -ENOSYS;
@@ -3832,44 +3829,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
dev_offloads = orig_offloads;
/* check which option changed by application */
- cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
- mask |= ETH_VLAN_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ mask |= RTE_ETH_VLAN_STRIP_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
- mask |= ETH_VLAN_FILTER_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+ mask |= RTE_ETH_VLAN_FILTER_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+ cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
- mask |= ETH_VLAN_EXTEND_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ mask |= RTE_ETH_VLAN_EXTEND_MASK;
}
- cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+ cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
- mask |= ETH_QINQ_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+ mask |= RTE_ETH_QINQ_STRIP_MASK;
}
/*no change*/
@@ -3914,17 +3911,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
dev = &rte_eth_devices[port_id];
dev_offloads = &dev->data->dev_conf.rxmode.offloads;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- ret |= ETH_VLAN_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- ret |= ETH_VLAN_FILTER_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
- ret |= ETH_VLAN_EXTEND_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
- ret |= ETH_QINQ_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+ ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
return ret;
}
@@ -4001,7 +3998,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -EINVAL;
}
- if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+ if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
return -EINVAL;
}
@@ -4019,7 +4016,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
{
uint16_t i, num;
- num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+ num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
if (reta_conf[i].mask)
return 0;
@@ -4041,8 +4038,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
(reta_conf[idx].reta[shift] >= max_rxq)) {
RTE_ETHDEV_LOG(ERR,
@@ -4198,7 +4195,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4224,7 +4221,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4365,8 +4362,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
port_id);
return -EINVAL;
}
- if (pool >= ETH_64_POOLS) {
- RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", ETH_64_POOLS - 1);
+ if (pool >= RTE_ETH_64_POOLS) {
+ RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
@@ -6275,7 +6272,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
rte_tel_data_add_dict_string(d, status_str, "UP");
rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
rte_tel_data_add_dict_string(d, "duplex",
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex");
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa4a68532db1..ff608afa960e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
* field is not supported, its value is 0.
* All byte-related statistics do not include Ethernet FCS regardless
* of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
*/
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of successfully received packets. */
@@ -281,43 +281,75 @@ struct rte_eth_stats {
/**@{@name Link speed capabilities
* Device supported speeds bitmap flags
*/
-#define ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
+#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
+#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
+#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
+#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
+#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
+#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
+#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
+#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**@}*/
/**@{@name Link speed
* Ethernet numeric link speeds in Mbps
*/
-#define ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
+#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
+#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
+#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
+#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
+#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
+#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
+#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
+#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
+#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**@}*/
/**
@@ -325,21 +357,27 @@ struct rte_eth_stats {
*/
__extension__
struct rte_eth_link {
- uint32_t link_speed; /**< ETH_SPEED_NUM_ */
- uint16_t link_duplex : 1; /**< ETH_LINK_[HALF/FULL]_DUPLEX */
- uint16_t link_autoneg : 1; /**< ETH_LINK_[AUTONEG/FIXED] */
- uint16_t link_status : 1; /**< ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /**< RTE_ETH_SPEED_NUM_ */
+ uint16_t link_duplex : 1; /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint16_t link_autoneg : 1; /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint16_t link_status : 1; /**< RTE_ETH_LINK_[DOWN/UP] */
} __rte_aligned(8); /**< aligned for atomic64 read/write */
/**@{@name Link negotiation
* Constants used in link management.
*/
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/
@@ -356,9 +394,12 @@ struct rte_eth_thresh {
/**@{@name Multi-queue mode
* @see rte_eth_conf.rxmode.mq_mode.
*/
-#define ETH_MQ_RX_RSS_FLAG 0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG 0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG 0x1 /**< Enable RSS. @see rte_eth_rss_conf */
+#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG 0x2 /**< Enable DCB. */
+#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**@}*/
/**
@@ -367,50 +408,49 @@ struct rte_eth_thresh {
*/
enum rte_eth_rx_mq_mode {
/** None of DCB, RSS or VMDq mode */
- ETH_MQ_RX_NONE = 0,
+ RTE_ETH_MQ_RX_NONE = 0,
/** For Rx side, only RSS is on */
- ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
/** For Rx side,only DCB is on. */
- ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
/** Both DCB and RSS enable */
- ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Only VMDq, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
/** RSS mode with VMDq */
- ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
/** Use VMDq+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Enable both VMDq and DCB in VMDq */
- ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
- ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+ RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-/**
- * for Rx mq mode backward compatible
- */
-#define ETH_RSS ETH_MQ_RX_RSS
-#define VMDQ_DCB ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
* packets using multi-TCs.
*/
enum rte_eth_tx_mq_mode {
- ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
- ETH_MQ_TX_DCB, /**< For Tx side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */
- ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
+ RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
+ RTE_ETH_MQ_TX_DCB, /**< For Tx side,only DCB is on. */
+ RTE_ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */
+ RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-
-/**
- * for Tx mq mode backward compatible
- */
-#define ETH_DCB_NONE ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the Rx features of an Ethernet port.
@@ -423,7 +463,7 @@ struct rte_eth_rxmode {
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
- * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -438,12 +478,17 @@ struct rte_eth_rxmode {
* Note that single VLAN is treated the same as inner VLAN.
*/
enum rte_vlan_type {
- ETH_VLAN_TYPE_UNKNOWN = 0,
- ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
- ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
- ETH_VLAN_TYPE_MAX,
+ RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+ RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+ RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+ RTE_ETH_VLAN_TYPE_MAX,
};
+#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+
/**
* A structure used to describe a VLAN filter.
* If the bit corresponding to a VID is set, such VID is on.
@@ -514,38 +559,70 @@ struct rte_eth_rss_conf {
* Below macros are defined for RSS offload types, they can be used to
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
-#define ETH_RSS_IPV4 RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6 RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
-#define ETH_RSS_PORT RTE_BIT64(18)
-#define ETH_RSS_VXLAN RTE_BIT64(19)
-#define ETH_RSS_GENEVE RTE_BIT64(20)
-#define ETH_RSS_NVGRE RTE_BIT64(21)
-#define ETH_RSS_GTPU RTE_BIT64(23)
-#define ETH_RSS_ETH RTE_BIT64(24)
-#define ETH_RSS_S_VLAN RTE_BIT64(25)
-#define ETH_RSS_C_VLAN RTE_BIT64(26)
-#define ETH_RSS_ESP RTE_BIT64(27)
-#define ETH_RSS_AH RTE_BIT64(28)
-#define ETH_RSS_L2TPV3 RTE_BIT64(29)
-#define ETH_RSS_PFCP RTE_BIT64(30)
-#define ETH_RSS_PPPOE RTE_BIT64(31)
-#define ETH_RSS_ECPRI RTE_BIT64(32)
-#define ETH_RSS_MPLS RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4 RTE_BIT64(2)
+#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6 RTE_BIT64(8)
+#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT RTE_BIT64(18)
+#define ETH_RSS_PORT RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN RTE_BIT64(19)
+#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE RTE_BIT64(20)
+#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE RTE_BIT64(21)
+#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU RTE_BIT64(23)
+#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH RTE_BIT64(24)
+#define ETH_RSS_ETH RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN RTE_BIT64(25)
+#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN RTE_BIT64(26)
+#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP RTE_BIT64(27)
+#define ETH_RSS_ESP RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH RTE_BIT64(28)
+#define ETH_RSS_AH RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29)
+#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP RTE_BIT64(30)
+#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE RTE_BIT64(31)
+#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI RTE_BIT64(32)
+#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS RTE_BIT64(33)
+#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM RTE_ETH_RSS_IPV4_CHKSUM
/**
* The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -554,41 +631,48 @@ struct rte_eth_rss_conf {
* checksum type for constructing the use of RSS offload bits.
*
* Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
*
* For the case that checksum is not used in an UDP header,
* it takes the reserved value 0 as input for the hash function.
*/
-#define ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM RTE_ETH_RSS_L4_CHKSUM
/*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
* more specific input set selection. These bits are defined starting
* from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
* both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
* the same level are used simultaneously, it is the same case as none of
* them are added.
*/
-#define ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
-#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
-#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
-#define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55)
-#define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54)
-#define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53)
-#define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52)
+#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
+#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
+#define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55)
+#define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54)
+#define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53)
+#define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52)
/*
* Use the following macros to combine with the above layers
@@ -603,22 +687,27 @@ struct rte_eth_rss_conf {
* It basically stands for the innermost encapsulation level RSS
* can be performed on according to PMD and device capabilities.
*/
-#define ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
-#define ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
-#define ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -633,217 +722,275 @@ struct rte_eth_rss_conf {
static inline uint64_t
rte_eth_rss_hf_refine(uint64_t rss_hf)
{
- if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
return rss_hf;
}
-#define ETH_RSS_IPV6_PRE32 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
-#define ETH_RSS_IPV6_PRE40 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
-#define ETH_RSS_IPV6_PRE48 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
-#define ETH_RSS_IPV6_PRE56 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
-#define ETH_RSS_IPV6_PRE64 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
-#define ETH_RSS_IPV6_PRE96 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
-#define ETH_RSS_IPV6_PRE32_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
-#define ETH_RSS_IPV6_PRE40_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
-#define ETH_RSS_IPV6_PRE48_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
-#define ETH_RSS_IPV6_PRE56_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
-#define ETH_RSS_IPV6_PRE64_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
-#define ETH_RSS_IPV6_PRE96_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
-#define ETH_RSS_IPV6_PRE32_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
-#define ETH_RSS_IPV6_PRE40_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
-#define ETH_RSS_IPV6_PRE48_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
-#define ETH_RSS_IPV6_PRE56_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
-#define ETH_RSS_IPV6_PRE64_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
-#define ETH_RSS_IPV6_PRE96_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
- ETH_RSS_S_VLAN | \
- ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
/** Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | \
- ETH_RSS_PORT | \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE | \
- ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE | \
+ RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
* Some RSS RETA sizes may not be supported by some drivers, check the
* documentation or the description of relevant functions for more details.
*/
-#define ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE 64
+#define RTE_ETH_RSS_RETA_SIZE_64 64
+#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE 64
+#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
/**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/**@}*/
/**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/**@}*/
/**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
+#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
+#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/**@}*/
/* Definitions used for receive MAC address */
-#define ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/**@{@name VMDq Rx mode
* @see rte_eth_vmdq_rx_conf.rx_mode
*/
-#define ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/**@}*/
/**
@@ -856,7 +1003,7 @@ struct rte_eth_rss_reta_entry64 {
/** Mask bits indicate which entries need to be updated/queried. */
uint64_t mask;
/** Group of 64 redirection table entries. */
- uint16_t reta[RTE_RETA_GROUP_SIZE];
+ uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
};
/**
@@ -864,38 +1011,44 @@ struct rte_eth_rss_reta_entry64 {
* in DCB configurations
*/
enum rte_eth_nb_tcs {
- ETH_4_TCS = 4, /**< 4 TCs with DCB. */
- ETH_8_TCS = 8 /**< 8 TCs with DCB. */
+ RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+ RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
* in VMDq configurations.
*/
enum rte_eth_nb_pools {
- ETH_8_POOLS = 8, /**< 8 VMDq pools. */
- ETH_16_POOLS = 16, /**< 16 VMDq pools. */
- ETH_32_POOLS = 32, /**< 32 VMDq pools. */
- ETH_64_POOLS = 64 /**< 64 VMDq pools. */
+ RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */
+ RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
+ RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
+ RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
+#define ETH_8_POOLS RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_dcb_tx_conf {
enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_dcb_tx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_tx_conf {
@@ -921,9 +1074,9 @@ struct rte_eth_vmdq_dcb_conf {
struct {
uint16_t vlan_id; /**< The VLAN ID of the received frame */
uint64_t pools; /**< Bitmask of pools for packet Rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
/** Selects a queue in a pool */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
/**
@@ -933,7 +1086,7 @@ struct rte_eth_vmdq_dcb_conf {
* Using this feature, packets are routed to a pool of queues. By default,
* the pool selection is based on the MAC address, the VLAN ID in the
* VLAN tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
* selection using only the MAC address. MAC address to pool mapping is done
* using the rte_eth_dev_mac_addr_add function, with the pool parameter
* corresponding to the pool ID.
@@ -954,7 +1107,7 @@ struct rte_eth_vmdq_rx_conf {
struct {
uint16_t vlan_id; /**< The VLAN ID of the received frame */
uint64_t pools; /**< Bitmask of pools for packet Rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
};
/**
@@ -963,7 +1116,7 @@ struct rte_eth_vmdq_rx_conf {
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */
/**
- * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -1055,7 +1208,7 @@ struct rte_eth_rxconf {
uint16_t share_group;
uint16_t share_qid; /**< Shared Rx queue ID in group */
/**
- * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_queue_offload_capa or rx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1084,7 +1237,7 @@ struct rte_eth_txconf {
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
- * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_queue_offload_capa or tx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1195,12 +1348,17 @@ struct rte_eth_desc_lim {
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
- RTE_FC_NONE = 0, /**< Disable flow control. */
- RTE_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
- RTE_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
- RTE_FC_FULL /**< Enable flow control on both side. */
+ RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+ RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
+ RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
+ RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
+#define RTE_FC_NONE RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_ETH_FC_FULL
+
/**
* A structure used to configure Ethernet flow control parameter.
* These parameters will be configured into the register of the NIC.
@@ -1231,18 +1389,29 @@ struct rte_eth_pfc_conf {
* @see rte_eth_udp_tunnel
*/
enum rte_eth_tunnel_type {
- RTE_TUNNEL_TYPE_NONE = 0,
- RTE_TUNNEL_TYPE_VXLAN,
- RTE_TUNNEL_TYPE_GENEVE,
- RTE_TUNNEL_TYPE_TEREDO,
- RTE_TUNNEL_TYPE_NVGRE,
- RTE_TUNNEL_TYPE_IP_IN_GRE,
- RTE_L2_TUNNEL_TYPE_E_TAG,
- RTE_TUNNEL_TYPE_VXLAN_GPE,
- RTE_TUNNEL_TYPE_ECPRI,
- RTE_TUNNEL_TYPE_MAX,
+ RTE_ETH_TUNNEL_TYPE_NONE = 0,
+ RTE_ETH_TUNNEL_TYPE_VXLAN,
+ RTE_ETH_TUNNEL_TYPE_GENEVE,
+ RTE_ETH_TUNNEL_TYPE_TEREDO,
+ RTE_ETH_TUNNEL_TYPE_NVGRE,
+ RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+ RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+ RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+ RTE_ETH_TUNNEL_TYPE_ECPRI,
+ RTE_ETH_TUNNEL_TYPE_MAX,
};
+#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1250,11 +1419,16 @@ enum rte_eth_tunnel_type {
* Memory space that can be configured to store Flow Director filters
* in the board memory.
*/
-enum rte_fdir_pballoc_type {
- RTE_FDIR_PBALLOC_64K = 0, /**< 64k. */
- RTE_FDIR_PBALLOC_128K, /**< 128k. */
- RTE_FDIR_PBALLOC_256K, /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+ RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
+ RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
+ RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
};
+#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in Rx descriptors.
@@ -1271,9 +1445,9 @@ enum rte_fdir_status_mode {
*
* If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
*/
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
enum rte_fdir_mode mode; /**< Flow Director mode. */
- enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+ enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
enum rte_fdir_status_mode status; /**< How to report FDIR hash. */
/** Rx queue of packets matching a "drop" filter in perfect mode. */
uint8_t drop_queue;
@@ -1282,6 +1456,8 @@ struct rte_fdir_conf {
struct rte_eth_fdir_flex_conf flex_conf;
};
+#define rte_fdir_conf rte_eth_fdir_conf
+
/**
* UDP tunneling configuration.
*
@@ -1299,7 +1475,7 @@ struct rte_eth_udp_tunnel {
/**
* A structure used to enable/disable specific device interrupts.
*/
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint32_t lsc:1;
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1308,18 +1484,20 @@ struct rte_intr_conf {
uint32_t rmv:1;
};
+#define rte_intr_conf rte_eth_intr_conf
+
/**
* A structure used to configure an Ethernet port.
* Depending upon the Rx multi-queue mode, extra advanced
* configuration settings may be needed.
*/
struct rte_eth_conf {
- uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
- used. ETH_LINK_SPEED_FIXED disables link
+ uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+ used. RTE_ETH_LINK_SPEED_FIXED disables link
autonegotiation, and a unique speed shall be
set. Otherwise, the bitmap defines the set of
speeds to be advertised. If the special value
- ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+ RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
supported are advertised. */
struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */
struct rte_eth_txmode txmode; /**< Port Tx configuration. */
@@ -1346,47 +1524,67 @@ struct rte_eth_conf {
struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
} tx_adv_conf; /**< Port Tx DCB configuration (union). */
/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
- is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+ is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT. */
uint32_t dcb_capability_en;
- struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
- struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+ struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+ struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
};
/**
* Rx offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP 0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
-#define DEV_RX_OFFLOAD_SECURITY 0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC 0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
-#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY 0x00008000
+#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC 0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH 0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
+#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
+
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1396,54 +1594,76 @@ struct rte_eth_conf {
/**
* Tx offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
/**
* Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* Tx queue without SW lock.
*/
-#define DEV_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/** Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**
* Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
-#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define RTE_ETH_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1591,7 +1811,7 @@ struct rte_eth_dev_info {
uint16_t vmdq_pool_base; /**< First ID of VMDq pools. */
struct rte_eth_desc_lim rx_desc_lim; /**< Rx descriptors limits */
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptors limits */
- uint32_t speed_capa; /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
/** Configured number of Rx/Tx queues */
uint16_t nb_rx_queues; /**< Number of Rx queues. */
uint16_t nb_tx_queues; /**< Number of Tx queues. */
@@ -1695,8 +1915,10 @@ struct rte_eth_xstat_name {
char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};
-#define ETH_DCB_NUM_TCS 8
-#define ETH_MAX_VMDQ_POOL 64
+#define RTE_ETH_DCB_NUM_TCS 8
+#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL 64
+#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
@@ -1707,12 +1929,12 @@ struct rte_eth_dcb_tc_queue_mapping {
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
/** Rx queues assigned to tc per Pool */
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
};
/**
@@ -1721,8 +1943,8 @@ struct rte_eth_dcb_tc_queue_mapping {
*/
struct rte_eth_dcb_info {
uint8_t nb_tcs; /**< number of TCs */
- uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
- uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
+ uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+ uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
/** Rx queues assigned to tc */
struct rte_eth_dcb_tc_queue_mapping tc_queue;
};
@@ -1746,7 +1968,7 @@ enum rte_eth_fec_mode {
/* A structure used to get capabilities per link speed */
struct rte_eth_fec_capa {
- uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+ uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
uint32_t capa; /**< FEC capabilities bitmask */
};
@@ -2075,14 +2297,14 @@ uint16_t rte_eth_dev_count_total(void);
* @param speed
* Numerical speed value in Mbps
* @param duplex
- * ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
* @return
* 0 if the speed cannot be mapped
*/
uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
/**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2092,7 +2314,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
const char *rte_eth_dev_rx_offload_name(uint64_t offload);
/**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2200,7 +2422,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
* In addition it contains the hardware offloads features to activate using
- * the DEV_RX_OFFLOAD_* flags.
+ * the RTE_ETH_RX_OFFLOAD_* flags.
* If an offloading set in rx_conf->offloads
* hasn't been set in the input argument eth_conf->rxmode.offloads
* to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2777,7 +2999,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
*
* @param str
* A pointer to a string to be filled with textual representation of
- * device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
* store default link status text.
* @param len
* Length of available memory at 'str' string.
@@ -3323,10 +3545,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
* The port identifier of the Ethernet device.
* @param offload_mask
* The VLAN Offload bit mask can be mixed use with "OR"
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* @return
* - (0) if successful.
* - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3342,10 +3564,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
* The port identifier of the Ethernet device.
* @return
* - (>0) if successful. Bit mask to indicate
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* - (-ENODEV) if *port_id* invalid.
*/
int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5371,7 +5593,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
* rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
* of those packets whose transmission was effectively completed.
*
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same Tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index db3392bf9759..59d9d9eeb63f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2957,7 +2957,7 @@ struct rte_flow_action_rss {
* through.
*/
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
#include "gso_udp4.h"
#define ILLEGAL_UDP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+ ((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
(ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
#define ILLEGAL_TCP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+ ((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
ol_flags = pkt->ol_flags;
if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_tunnel_udp4_segment(pkt, gso_size,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_TCP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_UDP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_udp4_segment(pkt, gso_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
uint32_t gso_types;
/**< the bit mask of required GSO types. The GSO library
* uses the same macros as that of describing device TX
- * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+ * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
* gso_types.
*
* For example, if applications want to segment TCP/IPv4
- * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+ * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
*/
uint16_t gso_size;
/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index fdaaaf67f2f3..57e871201816 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
* The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
* HW capability, At minimum, the PMD should support
* PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
*/
#define PKT_RX_OUTER_L4_CKSUM_MASK ((1ULL << 21) | (1ULL << 22))
@@ -208,7 +208,7 @@ extern "C" {
* a) Fill outer_l2_len and outer_l3_len in mbuf.
* b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
* c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
*/
#define PKT_TX_OUTER_UDP_CKSUM (1ULL << 41)
@@ -254,7 +254,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
* or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
@@ -267,7 +267,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
* if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
* of the dynamic field to be registered:
* const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
* - The application initializes the PMD, and asks for this feature
- * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
* rxconf. This will make the PMD to register the field by calling
* rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
* stores the returned offset.
--
2.31.1
* [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-10-19 18:35 4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-22 20:49 4% ` Harman Kalra
2021-10-24 20:04 4% ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
` (3 more replies)
2 siblings, 4 replies; 200+ results
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Move struct rte_intr_handle to an internal structure to
avoid ABI breakages in the future: this structure defines
some static arrays, and changing the respective macros breaks the ABI.
For example:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, we must either raise
the RTE_MAX_RXTX_INTR_VEC_ID limit or allocate dynamically based on the
PCI device's MSI-X size at probe time. Either way it is an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides the prototypes and implementation of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for
the interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it can't be static as the
size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll related definitions and prototypes are moved into a
new header, rte_epoll.h, and the APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirements using the new API rte_intr_handle_event_list_update().
E.g., on PCI device probing the MSI-X size can be queried and these arrays
can be reallocated accordingly.
Patch 6: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, as the memory allocated for the alarm
interrupt instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.
v5:
* Reverted back to passing flag to instance alloc API, as
with auto detect some multiprocess issues existing in the
library were causing tests failure.
* Rebased to top of tree.
Harman Kalra (6):
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 163 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 10 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 16 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 108 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 23 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 111 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 53 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 36 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 12 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 76 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 53 +-
lib/eal/freebsd/eal_interrupts.c | 112 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 303 +++++---
lib/eal/version.map | 46 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3631 insertions(+), 1713 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
* Re: [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI
@ 2021-10-22 21:24 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-22 21:24 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev
> Dmitry Kozlyuk (2):
> cmdline: make struct cmdline opaque
> cmdline: make struct rdline opaque
Applied, thanks.
* [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
@ 2021-10-24 20:04 4% ` David Marchand
2021-10-25 13:04 0% ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk
Make struct rte_intr_handle an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
the PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.
v5:
* Reverted back to passing flag to instance alloc API, as
with auto detect some multiprocess issues existing in the
library were causing tests failure.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED to RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
are squashed in it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers non requesting
RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides structure, but keep it in a EAL private
header, this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 28 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 528 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 44 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3473 insertions(+), 1742 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
* [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
2021-10-20 7:49 3% ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
@ 2021-10-25 11:32 3% ` Liguzinski, WojciechX
2021-10-26 8:24 3% ` Liu, Yu Y
2021-10-28 10:17 3% ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-25 11:32 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation where excess buffers in the network cause
high latency and latency variation. Currently, it supports RED for active
queue management. However, more advanced queue management is required to
address this problem and provide a desirable quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE"
(Proportional Integral controller Enhanced) that can effectively and directly
control queuing latency to address the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing
data structures, adding a new set of data structures to the library, and
adding PIE related APIs.
This affects structures in the public API/ABI. That is why a deprecation
notice is going to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 241 ++--
lib/sched/rte_sched.h | 63 +-
lib/sched/version.map | 4 +
19 files changed, 2172 insertions(+), 279 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-24 20:04 4% ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
@ 2021-10-25 13:04 0% ` Raslan Darawsheh
2021-10-25 13:09 0% ` David Marchand
2021-10-25 13:34 4% ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 14:27 4% ` [dpdk-dev] [PATCH v8 " David Marchand
3 siblings, 1 reply; 200+ results
From: Raslan Darawsheh @ 2021-10-25 13:04 UTC (permalink / raw)
To: Harman Kalra, dev
Cc: david.marchand, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Friday, October 22, 2021 11:49 PM
> To: dev@dpdk.org
> Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Harman Kalra <hkalra@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
>
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Furld
> efense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttps-
> 3A__docs.google.com_s&data=04%7C01%7Crasland%40nvidia.com%7C
> 567d8ee2e3c842a9e59808d9959d822e%7C43083d15727340c1b7db39efd9ccc1
> 7a%7C0%7C0%7C637705326003996997%7CUnknown%7CTWFpbGZsb3d8eyJ
> WIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%
> 7C1000&sdata=7UgxpkEtH%2Fnjk7xo9qELjqWi58XLzzCH2pimeDWLzvc%
> 3D&reserved=0
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-
> 23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-
> 7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c
> &s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get set wrapper APIs
> to read or manipulate its fields.. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are
> defined
> and also hides struct rte_intr_handle definition.
>
> Details on each patch of the series:
> Patch 1: eal/interrupts: implement get set APIs
> This patch provides prototypes and implementation of all the new
> get set APIs. Alloc APIs are implemented to allocate memory for
> interrupt handle instance. Currently most of the drivers defines
> interrupt handle instance as static but now it cant be static as
> size of rte_intr_handle is unknown to all the drivers. Drivers are
> expected to allocate interrupt instances during initialization
> and free these instances during cleanup phase.
> This patch also rearranges the headers related to interrupt
> framework. Epoll related definitions prototypes are moved into a
> new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
> which were driver specific are moved to rte_interrupts.h (as anyways
> it was accessible and used outside DPDK library. Later in the series
> rte_eal_interrupts.h is removed.
>
> Patch 2: eal/interrupts: avoid direct access to interrupt handle
> Modifying the interrupt framework for linux and freebsd to use these
> get set alloc APIs as per requirement and avoid accessing the fields
> directly.
>
> Patch 3: test/interrupt: apply get set interrupt handle APIs
> Updating interrupt test suite to use interrupt handle APIs.
>
> Patch 4: drivers: remove direct access to interrupt handle fields
> Modifying all the drivers and libraries which are currently directly
> accessing the interrupt handle fields. Drivers are expected to
> allocated the interrupt instance, use get set APIs with the allocated
> interrupt handle and free it on cleanup.
>
> Patch 5: eal/interrupts: make interrupt handle structure opaque
> In this patch rte_eal_interrupt.h is removed, struct rte_intr_handle
> definition is moved to c file to make it completely opaque. As part of
> interrupt handle allocation, array like efds and elist(which are currently
> static) are dynamically allocated with default size
> (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
> device requirement using new API rte_intr_handle_event_list_update().
> Eg, on PCI device probing MSIX size can be queried and these arrays can
> be reallocated accordingly.
>
> Patch 6: eal/alarm: introduce alarm fini routine
> Introducing alarm fini routine, as the memory allocated for alarm interrupt
> instance can be freed in alarm fini.
>
> Testing performed:
> 1. Validated the series by running interrupts and alarm test suite.
> 2. Validate l3fwd power functionality with octeontx2 and i40e intel cards,
> where interrupts are expected on packet arrival.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally
> allocated memory information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
>
> v3:
> * Removed flag from instance alloc API, rather auto detect
> if memory should be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
>
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
>
> v5:
> * Reverted back to passing flag to instance alloc API, as
> with auto detect some multiprocess issues existing in the
> library were causing tests failure.
> * Rebased to top of tree.
>
> Harman Kalra (6):
> eal/interrupts: implement get set APIs
> eal/interrupts: avoid direct access to interrupt handle
> test/interrupt: apply get set interrupt handle APIs
> drivers: remove direct access to interrupt handle
> eal/interrupts: make interrupt handle structure opaque
> eal/alarm: introduce alarm fini routine
>
> MAINTAINERS | 1 +
> app/test/test_interrupts.c | 163 +++--
> drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
> .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
> drivers/bus/auxiliary/auxiliary_common.c | 2 +
> drivers/bus/auxiliary/linux/auxiliary.c | 10 +
> drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> drivers/bus/dpaa/dpaa_bus.c | 28 +-
> drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> drivers/bus/fslmc/fslmc_bus.c | 16 +-
> drivers/bus/fslmc/fslmc_vfio.c | 32 +-
> drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
> drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> drivers/bus/fslmc/rte_fslmc.h | 2 +-
> drivers/bus/ifpga/ifpga_bus.c | 15 +-
> drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> drivers/bus/pci/bsd/pci.c | 21 +-
> drivers/bus/pci/linux/pci.c | 4 +-
> drivers/bus/pci/linux/pci_uio.c | 73 +-
> drivers/bus/pci/linux/pci_vfio.c | 115 ++-
> drivers/bus/pci/pci_common.c | 29 +-
> drivers/bus/pci/pci_common_uio.c | 21 +-
> drivers/bus/pci/rte_bus_pci.h | 4 +-
> drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
> drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
> drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
> drivers/common/cnxk/roc_cpt.c | 8 +-
> drivers/common/cnxk/roc_dev.c | 14 +-
> drivers/common/cnxk/roc_irq.c | 108 +--
> drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
> drivers/common/cnxk/roc_nix_irq.c | 36 +-
> drivers/common/cnxk/roc_npa.c | 2 +-
> drivers/common/cnxk/roc_platform.h | 49 +-
> drivers/common/cnxk/roc_sso.c | 4 +-
> drivers/common/cnxk/roc_tim.c | 4 +-
> drivers/common/octeontx2/otx2_dev.c | 14 +-
> drivers/common/octeontx2/otx2_irq.c | 117 +--
> .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> drivers/net/atlantic/atl_ethdev.c | 20 +-
> drivers/net/avp/avp_ethdev.c | 8 +-
> drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> drivers/net/axgbe/axgbe_mdio.c | 6 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> drivers/net/bnxt/bnxt_ethdev.c | 33 +-
> drivers/net/bnxt/bnxt_irq.c | 4 +-
> drivers/net/dpaa/dpaa_ethdev.c | 47 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> drivers/net/e1000/em_ethdev.c | 23 +-
> drivers/net/e1000/igb_ethdev.c | 79 +--
> drivers/net/ena/ena_ethdev.c | 35 +-
> drivers/net/enic/enic_main.c | 26 +-
> drivers/net/failsafe/failsafe.c | 23 +-
> drivers/net/failsafe/failsafe_intr.c | 43 +-
> drivers/net/failsafe/failsafe_ops.c | 19 +-
> drivers/net/failsafe/failsafe_private.h | 2 +-
> drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> drivers/net/hns3/hns3_ethdev.c | 57 +-
> drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
> drivers/net/hns3/hns3_rxtx.c | 2 +-
> drivers/net/i40e/i40e_ethdev.c | 53 +-
> drivers/net/iavf/iavf_ethdev.c | 42 +-
> drivers/net/iavf/iavf_vchnl.c | 4 +-
> drivers/net/ice/ice_dcf.c | 10 +-
> drivers/net/ice/ice_dcf_ethdev.c | 21 +-
> drivers/net/ice/ice_ethdev.c | 49 +-
> drivers/net/igc/igc_ethdev.c | 45 +-
> drivers/net/ionic/ionic_ethdev.c | 17 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
> drivers/net/memif/memif_socket.c | 111 ++-
> drivers/net/memif/memif_socket.h | 4 +-
> drivers/net/memif/rte_eth_memif.c | 61 +-
> drivers/net/memif/rte_eth_memif.h | 2 +-
> drivers/net/mlx4/mlx4.c | 19 +-
> drivers/net/mlx4/mlx4.h | 2 +-
> drivers/net/mlx4/mlx4_intr.c | 47 +-
> drivers/net/mlx5/linux/mlx5_os.c | 53 +-
> drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
> drivers/net/mlx5/mlx5.h | 6 +-
> drivers/net/mlx5/mlx5_rxq.c | 42 +-
> drivers/net/mlx5/mlx5_trigger.c | 4 +-
> drivers/net/mlx5/mlx5_txpp.c | 26 +-
> drivers/net/netvsc/hn_ethdev.c | 4 +-
> drivers/net/nfp/nfp_common.c | 34 +-
> drivers/net/nfp/nfp_ethdev.c | 13 +-
> drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> drivers/net/ngbe/ngbe_ethdev.c | 29 +-
> drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> drivers/net/qede/qede_ethdev.c | 16 +-
> drivers/net/sfc/sfc_intr.c | 30 +-
> drivers/net/tap/rte_eth_tap.c | 36 +-
> drivers/net/tap/rte_eth_tap.h | 2 +-
> drivers/net/tap/tap_intr.c | 32 +-
> drivers/net/thunderx/nicvf_ethdev.c | 12 +
> drivers/net/thunderx/nicvf_struct.h | 2 +-
> drivers/net/txgbe/txgbe_ethdev.c | 38 +-
> drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
> drivers/net/vhost/rte_eth_vhost.c | 76 +-
> drivers/net/virtio/virtio_ethdev.c | 21 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
> drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
> drivers/raw/ntb/ntb.c | 9 +-
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
> drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
> drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
> lib/bbdev/rte_bbdev.c | 4 +-
> lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
> lib/eal/common/eal_private.h | 11 +
> lib/eal/common/meson.build | 1 +
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 53 +-
> lib/eal/freebsd/eal_interrupts.c | 112 ++-
> lib/eal/include/meson.build | 2 +-
> lib/eal/include/rte_eal_interrupts.h | 269 -------
> lib/eal/include/rte_eal_trace.h | 24 +-
> lib/eal/include/rte_epoll.h | 118 ++++
> lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 37 +-
> lib/eal/linux/eal_dev.c | 63 +-
> lib/eal/linux/eal_interrupts.c | 303 +++++---
> lib/eal/version.map | 46 +-
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_ethdev.c | 14 +-
> 132 files changed, 3631 insertions(+), 1713 deletions(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
> delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> create mode 100644 lib/eal/include/rte_epoll.h
>
> --
> 2.18.0
This series causes the following segfault with the MLX5 PMD:
Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
1512 if (__atomic_load_n(&rev->status,
(gdb) bt
#0 rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
#1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
#2 0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
#3 0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
#4 0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
#5 0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
(gdb) f 1
#1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
934 rte_intr_free_epoll_fd(intr_handle);
It can be easily reproduced as follows:
dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-10-25 13:04 0% ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
@ 2021-10-25 13:09 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 13:09 UTC (permalink / raw)
To: Raslan Darawsheh
Cc: Harman Kalra, dev, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon
On Mon, Oct 25, 2021 at 3:04 PM Raslan Darawsheh <rasland@nvidia.com> wrote:
>
> Hi,
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > Sent: Friday, October 22, 2021 11:49 PM
> > To: dev@dpdk.org
> > Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> > mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Harman Kalra <hkalra@marvell.com>
> > Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
> >
> > Moving struct rte_intr_handle as an internal structure to
> > avoid any ABI breakages in future. Since this structure defines
> > some static arrays and changing respective macros breaks the ABI.
> > Eg:
> > Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> > MSI-X interrupts that can be defined for a PCI device, while PCI
> > specification allows maximum 2048 MSI-X interrupts that can be used.
> > If some PCI device requires more than 512 vectors, either change the
> > RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> > the PCI device's MSI-X size at probe time. Either way it's an ABI breakage.
> >
> > Change already included in 21.11 ABI improvement spreadsheet (item 42):
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
> > preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
> > 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
> > XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
> > Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
> >
> > This series makes struct rte_intr_handle totally opaque to the outside
> > world by wrapping it inside a .c file and providing get/set wrapper APIs
> > to read or manipulate its fields. Any changes to be made to any of the
> > fields should be done via these get/set APIs.
> > Introduced a new eal_common_interrupts.c where all these APIs are
> > defined
> > and also hides struct rte_intr_handle definition.
> >
> > Details on each patch of the series:
> > Patch 1: eal/interrupts: implement get set APIs
> > This patch provides prototypes and implementation of all the new
> > get set APIs. Alloc APIs are implemented to allocate memory for
> > interrupt handle instance. Currently most of the drivers defines
> > interrupt handle instance as static but now it cant be static as
> > size of rte_intr_handle is unknown to all the drivers. Drivers are
> > expected to allocate interrupt instances during initialization
> > and free these instances during cleanup phase.
> > This patch also rearranges the headers related to interrupt
> > framework. Epoll related definitions prototypes are moved into a
> > new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
> > which were driver specific are moved to rte_interrupts.h (as they were
> > anyway accessible and used outside the DPDK library). Later in the series
> > rte_eal_interrupts.h is removed.
> >
> > Patch 2: eal/interrupts: avoid direct access to interrupt handle
> > Modifying the interrupt framework for linux and freebsd to use these
> > get set alloc APIs as per requirement and avoid accessing the fields
> > directly.
> >
> > Patch 3: test/interrupt: apply get set interrupt handle APIs
> > Updating interrupt test suite to use interrupt handle APIs.
> >
> > Patch 4: drivers: remove direct access to interrupt handle fields
> > Modifying all the drivers and libraries which are currently directly
> > accessing the interrupt handle fields. Drivers are expected to
> > allocate the interrupt instance, use get/set APIs with the allocated
> > interrupt handle and free it on cleanup.
> >
> > Patch 5: eal/interrupts: make interrupt handle structure opaque
> > In this patch rte_eal_interrupt.h is removed, struct rte_intr_handle
> > definition is moved to c file to make it completely opaque. As part of
> > interrupt handle allocation, array like efds and elist(which are currently
> > static) are dynamically allocated with default size
> > (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
> > device requirement using new API rte_intr_handle_event_list_update().
> > Eg, on PCI device probing MSIX size can be queried and these arrays can
> > be reallocated accordingly.
> >
> > Patch 6: eal/alarm: introduce alarm fini routine
> > Introducing alarm fini routine, as the memory allocated for alarm interrupt
> > instance can be freed in alarm fini.
> >
> > Testing performed:
> > 1. Validated the series by running interrupts and alarm test suite.
> > 2. Validated l3fwd power functionality with octeontx2 and i40e Intel cards,
> > where interrupts are expected on packet arrival.
> >
> > v1:
> > * Fixed freebsd compilation failure
> > * Fixed seg fault in case of memif
> >
> > v2:
> > * Merged the prototype and implementation patch to 1.
> > * Restricting allocation of single interrupt instance.
> > * Removed base APIs, as they were exposing internally
> > allocated memory information.
> > * Fixed some memory leak issues.
> > * Marked some library specific APIs as internal.
> >
> > v3:
> > * Removed flag from instance alloc API, rather auto detect
> > if memory should be allocated using glibc malloc APIs or
> > rte_malloc*
> > * Added APIs for get/set windows handle.
> > * Defined macros for repeated checks.
> >
> > v4:
> > * Rectified some typo in the APIs documentation.
> > * Better names for some internal variables.
> >
> > v5:
> > * Reverted back to passing flag to instance alloc API, as
> > with auto detect some multiprocess issues existing in the
> > library were causing tests failure.
> > * Rebased to top of tree.
> >
> > Harman Kalra (6):
> > eal/interrupts: implement get set APIs
> > eal/interrupts: avoid direct access to interrupt handle
> > test/interrupt: apply get set interrupt handle APIs
> > drivers: remove direct access to interrupt handle
> > eal/interrupts: make interrupt handle structure opaque
> > eal/alarm: introduce alarm fini routine
> >
> > MAINTAINERS | 1 +
> > app/test/test_interrupts.c | 163 +++--
> > drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
> > .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
> > drivers/bus/auxiliary/auxiliary_common.c | 2 +
> > drivers/bus/auxiliary/linux/auxiliary.c | 10 +
> > drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> > drivers/bus/dpaa/dpaa_bus.c | 28 +-
> > drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> > drivers/bus/fslmc/fslmc_bus.c | 16 +-
> > drivers/bus/fslmc/fslmc_vfio.c | 32 +-
> > drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
> > drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> > drivers/bus/fslmc/rte_fslmc.h | 2 +-
> > drivers/bus/ifpga/ifpga_bus.c | 15 +-
> > drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> > drivers/bus/pci/bsd/pci.c | 21 +-
> > drivers/bus/pci/linux/pci.c | 4 +-
> > drivers/bus/pci/linux/pci_uio.c | 73 +-
> > drivers/bus/pci/linux/pci_vfio.c | 115 ++-
> > drivers/bus/pci/pci_common.c | 29 +-
> > drivers/bus/pci/pci_common_uio.c | 21 +-
> > drivers/bus/pci/rte_bus_pci.h | 4 +-
> > drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
> > drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
> > drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> > drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
> > drivers/common/cnxk/roc_cpt.c | 8 +-
> > drivers/common/cnxk/roc_dev.c | 14 +-
> > drivers/common/cnxk/roc_irq.c | 108 +--
> > drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
> > drivers/common/cnxk/roc_nix_irq.c | 36 +-
> > drivers/common/cnxk/roc_npa.c | 2 +-
> > drivers/common/cnxk/roc_platform.h | 49 +-
> > drivers/common/cnxk/roc_sso.c | 4 +-
> > drivers/common/cnxk/roc_tim.c | 4 +-
> > drivers/common/octeontx2/otx2_dev.c | 14 +-
> > drivers/common/octeontx2/otx2_irq.c | 117 +--
> > .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> > drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> > drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> > drivers/net/atlantic/atl_ethdev.c | 20 +-
> > drivers/net/avp/avp_ethdev.c | 8 +-
> > drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> > drivers/net/axgbe/axgbe_mdio.c | 6 +-
> > drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> > drivers/net/bnxt/bnxt_ethdev.c | 33 +-
> > drivers/net/bnxt/bnxt_irq.c | 4 +-
> > drivers/net/dpaa/dpaa_ethdev.c | 47 +-
> > drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> > drivers/net/e1000/em_ethdev.c | 23 +-
> > drivers/net/e1000/igb_ethdev.c | 79 +--
> > drivers/net/ena/ena_ethdev.c | 35 +-
> > drivers/net/enic/enic_main.c | 26 +-
> > drivers/net/failsafe/failsafe.c | 23 +-
> > drivers/net/failsafe/failsafe_intr.c | 43 +-
> > drivers/net/failsafe/failsafe_ops.c | 19 +-
> > drivers/net/failsafe/failsafe_private.h | 2 +-
> > drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> > drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> > drivers/net/hns3/hns3_ethdev.c | 57 +-
> > drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
> > drivers/net/hns3/hns3_rxtx.c | 2 +-
> > drivers/net/i40e/i40e_ethdev.c | 53 +-
> > drivers/net/iavf/iavf_ethdev.c | 42 +-
> > drivers/net/iavf/iavf_vchnl.c | 4 +-
> > drivers/net/ice/ice_dcf.c | 10 +-
> > drivers/net/ice/ice_dcf_ethdev.c | 21 +-
> > drivers/net/ice/ice_ethdev.c | 49 +-
> > drivers/net/igc/igc_ethdev.c | 45 +-
> > drivers/net/ionic/ionic_ethdev.c | 17 +-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
> > drivers/net/memif/memif_socket.c | 111 ++-
> > drivers/net/memif/memif_socket.h | 4 +-
> > drivers/net/memif/rte_eth_memif.c | 61 +-
> > drivers/net/memif/rte_eth_memif.h | 2 +-
> > drivers/net/mlx4/mlx4.c | 19 +-
> > drivers/net/mlx4/mlx4.h | 2 +-
> > drivers/net/mlx4/mlx4_intr.c | 47 +-
> > drivers/net/mlx5/linux/mlx5_os.c | 53 +-
> > drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
> > drivers/net/mlx5/mlx5.h | 6 +-
> > drivers/net/mlx5/mlx5_rxq.c | 42 +-
> > drivers/net/mlx5/mlx5_trigger.c | 4 +-
> > drivers/net/mlx5/mlx5_txpp.c | 26 +-
> > drivers/net/netvsc/hn_ethdev.c | 4 +-
> > drivers/net/nfp/nfp_common.c | 34 +-
> > drivers/net/nfp/nfp_ethdev.c | 13 +-
> > drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> > drivers/net/ngbe/ngbe_ethdev.c | 29 +-
> > drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> > drivers/net/qede/qede_ethdev.c | 16 +-
> > drivers/net/sfc/sfc_intr.c | 30 +-
> > drivers/net/tap/rte_eth_tap.c | 36 +-
> > drivers/net/tap/rte_eth_tap.h | 2 +-
> > drivers/net/tap/tap_intr.c | 32 +-
> > drivers/net/thunderx/nicvf_ethdev.c | 12 +
> > drivers/net/thunderx/nicvf_struct.h | 2 +-
> > drivers/net/txgbe/txgbe_ethdev.c | 38 +-
> > drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
> > drivers/net/vhost/rte_eth_vhost.c | 76 +-
> > drivers/net/virtio/virtio_ethdev.c | 21 +-
> > .../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
> > drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
> > drivers/raw/ntb/ntb.c | 9 +-
> > .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> > drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> > drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
> > drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> > drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
> > drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
> > lib/bbdev/rte_bbdev.c | 4 +-
> > lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
> > lib/eal/common/eal_private.h | 11 +
> > lib/eal/common/meson.build | 1 +
> > lib/eal/freebsd/eal.c | 1 +
> > lib/eal/freebsd/eal_alarm.c | 53 +-
> > lib/eal/freebsd/eal_interrupts.c | 112 ++-
> > lib/eal/include/meson.build | 2 +-
> > lib/eal/include/rte_eal_interrupts.h | 269 -------
> > lib/eal/include/rte_eal_trace.h | 24 +-
> > lib/eal/include/rte_epoll.h | 118 ++++
> > lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
> > lib/eal/linux/eal.c | 1 +
> > lib/eal/linux/eal_alarm.c | 37 +-
> > lib/eal/linux/eal_dev.c | 63 +-
> > lib/eal/linux/eal_interrupts.c | 303 +++++---
> > lib/eal/version.map | 46 +-
> > lib/ethdev/ethdev_pci.h | 2 +-
> > lib/ethdev/rte_ethdev.c | 14 +-
> > 132 files changed, 3631 insertions(+), 1713 deletions(-)
> > create mode 100644 lib/eal/common/eal_common_interrupts.c
> > delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> > create mode 100644 lib/eal/include/rte_epoll.h
> >
> > --
> > 2.18.0
>
> This series causes the following segfault with the MLX5 PMD:
> Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
> rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> 1512 if (__atomic_load_n(&rev->status,
> (gdb) bt
> #0 rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> #1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> #2 0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
> #3 0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
> #4 0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
> #5 0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
> (gdb) f 1
> #1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> 934 rte_intr_free_epoll_fd(intr_handle);
>
>
> It can be easily reproduced as follows:
> dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'
>
That confirms my suspicion about the pci bus update that looks at
RTE_PCI_DRV_NEED_MAPPING.
v7 incoming.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v7 0/9] make rte_intr_handle internal
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-24 20:04 4% ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
2021-10-25 13:04 0% ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
@ 2021-10-25 13:34 4% ` David Marchand
2021-10-25 14:27 4% ` [dpdk-dev] [PATCH v8 " David Marchand
3 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
Move struct rte_intr_handle to an internal structure to
avoid ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
the PCI device's MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c defines all these APIs and also hides the
struct rte_intr_handle definition.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API; instead auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.
v5:
* Reverted to passing a flag to the instance alloc API, as
auto-detection triggered multiprocess issues existing in the
library, causing test failures.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
are squashed in it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers not requesting
RTE_PCI_DRV_NEED_MAPPING, but code is left as in v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides structure, but keep it in a EAL private
header, this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
RTE_PCI_DRV_NEED_MAPPING,
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 47 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 504 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3453 insertions(+), 1748 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (2 preceding siblings ...)
2021-10-25 13:34 4% ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
@ 2021-10-25 14:27 4% ` David Marchand
2021-10-25 14:32 0% ` Raslan Darawsheh
2021-10-25 19:24 0% ` David Marchand
3 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
Move struct rte_intr_handle to an internal structure to
avoid ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
the PCI device's MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c defines all these APIs and also hides the
struct rte_intr_handle definition.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API; instead auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.
v5:
* Reverted to passing a flag to the instance alloc API, as
auto-detection triggered multiprocess issues existing in the
library, causing test failures.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
are squashed in it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers not requesting
RTE_PCI_DRV_NEED_MAPPING, but code is left as in v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides structure, but keep it in a EAL private
header, this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
RTE_PCI_DRV_NEED_MAPPING,
v8:
* lowered log level to DEBUG in sanity checks,
* fixed corner case with vector list access,
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 47 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 500 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3449 insertions(+), 1748 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-25 14:27 4% ` [dpdk-dev] [PATCH v8 " David Marchand
@ 2021-10-25 14:32 0% ` Raslan Darawsheh
2021-10-25 19:24 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: Raslan Darawsheh @ 2021-10-25 14:32 UTC (permalink / raw)
To: David Marchand, hkalra, dev; +Cc: dmitry.kozliuk, NBU-Contact-Thomas Monjalon
Hi,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Monday, October 25, 2021 5:27 PM
> To: hkalra@marvell.com; dev@dpdk.org
> Cc: dmitry.kozliuk@gmail.com; Raslan Darawsheh <rasland@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Subject: [PATCH v8 0/9] make rte_intr_handle internal
>
> Moving struct rte_intr_handle as an internal structure to avoid any ABI
> breakages in future. Since this structure defines some static arrays and
> changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI specification
> allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on PCI
> device MSI-X size at probe time. Either way it's an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>
> This series makes struct rte_intr_handle totally opaque to the outside world
> by wrapping it inside a .c file and providing get/set wrapper APIs to read or
> manipulate its fields. Any changes to be made to any of the fields should be
> done via these get/set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are
> defined and also hides struct rte_intr_handle definition.
>
> [v1..v8 changelog, patch list, and diffstat snipped; identical to the cover letter above]
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] ci: update machine meson option to platform
@ 2021-10-25 15:42 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-25 15:42 UTC (permalink / raw)
To: Juraj Linkeš
Cc: dev, david.marchand, maicolgabriel, ohilyard, ci, Aaron Conole
14/10/2021 14:26, Aaron Conole:
> Juraj Linkeš <juraj.linkes@pantheon.tech> writes:
>
> > The way we're building DPDK in CI, with -Dmachine=default, has not been
> > updated when the option got replaced to preserve a backwards-complatible
> > build call to facilitate ABI verification between DPDK versions. Update
> > the call to use -Dplatform=generic, which is the most up to date way to
> > execute the same build which is now present in all DPDK versions the ABI
> > check verifies.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>
> Acked-by: Aaron Conole <aconole@redhat.com>
Applied, thanks.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-25 14:27 4% ` [dpdk-dev] [PATCH v8 " David Marchand
2021-10-25 14:32 0% ` Raslan Darawsheh
@ 2021-10-25 19:24 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 19:24 UTC (permalink / raw)
To: Harman Kalra, dev; +Cc: Dmitry Kozlyuk, Raslan Darawsheh, Thomas Monjalon
On Mon, Oct 25, 2021 at 4:27 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size at probe time. Either way it's an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get/set wrapper APIs
> to read or manipulate its fields. Any changes to be made to any of the
> fields should be done via these get/set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are defined
> and also hides struct rte_intr_handle definition.
>
> [v1..v8 changelog snipped; identical to the cover letter above]
>
> --
> David Marchand
>
> Harman Kalra (9):
> interrupts: add allocator and accessors
> interrupts: remove direct access to interrupt handle
> test/interrupts: remove direct access to interrupt handle
> alarm: remove direct access to interrupt handle
> lib: remove direct access to interrupt handle
> drivers: remove direct access to interrupt handle
> interrupts: make interrupt handle structure opaque
> interrupts: rename device specific file descriptor
> interrupts: extend event list
Series applied, thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
@ 2021-10-25 21:40 4% Thomas Monjalon
2021-10-28 7:10 0% ` Jiang, YuX
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2021-10-25 21:40 UTC (permalink / raw)
To: announce
A new DPDK release candidate is ready for testing:
https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
There are 1171 new patches in this snapshot, big as expected.
Release notes:
https://doc.dpdk.org/guides/rel_notes/release_21_11.html
Highlights of 21.11-rc1:
* General
- more than 512 MSI-X interrupts
- hugetlbfs subdirectories
- mempool flag for non-IO usages
- device class for DMA accelerators
- DMA drivers for Intel DSA and IOAT
* Networking
- MTU handling rework
- get all MAC addresses of a port
- RSS based on L3/L4 checksum fields
- flow match on L2TPv2 and PPP
- flow flex parser for custom header
- control delivery of HW Rx metadata
- transfer flows API rework
- shared Rx queue
- Windows support of Intel e1000, ixgbe and iavf
- testpmd multi-process
- pcapng library and dumpcap tool
* API/ABI
- API namespace improvements (mempool, mbuf, ethdev)
- API internals hidden (intr, ethdev, security, cryptodev, eventdev, cmdline)
- flags check for future ABI compatibility (memzone, mbuf, mempool)
Please test and report issues on bugs.dpdk.org.
DPDK 21.11-rc2 is expected in two weeks or less.
Thank you everyone
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
2021-10-25 11:32 3% ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
@ 2021-10-26 8:24 3% ` Liu, Yu Y
2021-10-26 8:33 0% ` Thomas Monjalon
2021-10-28 10:17 3% ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
1 sibling, 1 reply; 200+ results
From: Liu, Yu Y @ 2021-10-26 8:24 UTC (permalink / raw)
To: Thomas Monjalon, dev, Liguzinski, WojciechX, Singh, Jasvinder,
Dumitrescu, Cristian
Cc: Ajmera, Megha, Liu, Yu Y
Hi Thomas,
Would you merge this series? It is acked by Cristian, see below:
https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-1-wojciechx.liguzinski@intel.com/
Thanks & Regards,
Yu Liu
-----Original Message-----
From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
Sent: Monday, October 25, 2021 7:32 PM
To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
Cc: Ajmera, Megha <megha.ajmera@intel.com>
Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat problem, a situation in which excess buffers in the network cause high latency and latency variation. Currently, it supports RED for active queue management. However, more advanced queue management is required to address this problem and provide desirable quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral controller Enhanced) that can effectively and directly control queuing latency to address the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data structures, adding a new set of data structures to the library, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 241 ++--
lib/sched/rte_sched.h | 63 +-
lib/sched/version.map | 4 +
19 files changed, 2172 insertions(+), 279 deletions(-) create mode 100644 app/test/test_pie.c create mode 100644 lib/sched/rte_pie.c create mode 100644 lib/sched/rte_pie.h
--
2.25.1
Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
2021-10-26 8:24 3% ` Liu, Yu Y
@ 2021-10-26 8:33 0% ` Thomas Monjalon
2021-10-26 10:02 0% ` Dumitrescu, Cristian
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-26 8:33 UTC (permalink / raw)
To: Liguzinski, WojciechX, Singh, Jasvinder, Dumitrescu, Cristian, Liu, Yu Y
Cc: dev, Ajmera, Megha, Liu, Yu Y, david.marchand
26/10/2021 10:24, Liu, Yu Y:
> Hi Thomas,
>
> Would you merge this patch as the series is acked by Cristian as below?
> https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-1-wojciechx.liguzinski@intel.com/
I didn't see any email from Cristian.
It seems you just added this ack silently at the bottom of the cover letter.
1/ an email from Cristian is far better
2/ when integrating ack, it must be done in patches, not cover letter
>
> Thanks & Regards,
> Yu Liu
>
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
> Sent: Monday, October 25, 2021 7:32 PM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
>
> DPDK sched library is equipped with mechanism that secures it from the bufferbloat problem which is a situation when excess buffers in the network cause high latency and latency variation. Currently, it supports RED for active queue management. However, more advanced queue management is required to address this problem and provide desirable quality of service to users.
>
> This solution (RFC) proposes usage of new algorithm called "PIE" (Proportional Integral controller Enhanced) that can effectively and directly control queuing latency to address the bufferbloat problem.
>
> The implementation of mentioned functionality includes modification of existing and adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is going to be prepared and sent.
>
> Liguzinski, WojciechX (5):
> sched: add PIE based congestion management
> example/qos_sched: add PIE support
> example/ip_pipeline: add PIE support
> doc/guides/prog_guide: added PIE
> app/test: add tests for PIE
>
> app/test/meson.build | 4 +
> app/test/test_pie.c | 1065 ++++++++++++++++++
> config/rte_config.h | 1 -
> doc/guides/prog_guide/glossary.rst | 3 +
> doc/guides/prog_guide/qos_framework.rst | 64 +-
> doc/guides/prog_guide/traffic_management.rst | 13 +-
> drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
> examples/ip_pipeline/tmgr.c | 142 +--
> examples/qos_sched/cfg_file.c | 127 ++-
> examples/qos_sched/cfg_file.h | 5 +
> examples/qos_sched/init.c | 27 +-
> examples/qos_sched/main.h | 3 +
> examples/qos_sched/profile.cfg | 196 ++--
> lib/sched/meson.build | 3 +-
> lib/sched/rte_pie.c | 86 ++
> lib/sched/rte_pie.h | 398 +++++++
> lib/sched/rte_sched.c | 241 ++--
> lib/sched/rte_sched.h | 63 +-
> lib/sched/version.map | 4 +
> 19 files changed, 2172 insertions(+), 279 deletions(-) create mode 100644 app/test/test_pie.c create mode 100644 lib/sched/rte_pie.c create mode 100644 lib/sched/rte_pie.h
>
> --
> 2.25.1
>
> Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
2021-10-26 8:33 0% ` Thomas Monjalon
@ 2021-10-26 10:02 0% ` Dumitrescu, Cristian
0 siblings, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-26 10:02 UTC (permalink / raw)
To: Thomas Monjalon, Liguzinski, WojciechX, Singh, Jasvinder, Liu,
Yu Y, Singh, Jasvinder
Cc: dev, Ajmera, Megha, Liu, Yu Y, david.marchand
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 26, 2021 9:33 AM
> To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Liu, Yu Y <yu.y.liu@intel.com>
> Cc: dev@dpdk.org; Ajmera, Megha <megha.ajmera@intel.com>; Liu, Yu Y
> <yu.y.liu@intel.com>; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
>
> 26/10/2021 10:24, Liu, Yu Y:
> > Hi Thomas,
> >
> > Would you merge this patch as the series is acked by Cristian as below?
> >
> https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-
> 1-wojciechx.liguzinski@intel.com/
>
> I didn't see any email from Cristian.
> It seems you just added this ack silently at the bottom of the cover letter.
>
> 1/ an email from Cristian is far better
> 2/ when integrating ack, it must be done in patches, not cover letter
>
Hi Thomas,
I did ack this set in a previous version (V15) by replying with "Series-acked-by" on the cover letter email, which does not show in patchwork. Is there a better way to do this?
It would be good to have Jasvinder's ack as well on this series, as he is looking into some other aspects of the sched library.
Regards,
Cristian
>
> >
> > Thanks & Regards,
> > Yu Liu
> >
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
> > Sent: Monday, October 25, 2021 7:32 PM
> > To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> > Cc: Ajmera, Megha <megha.ajmera@intel.com>
> > Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
> >
> > The DPDK sched library is equipped with a mechanism that protects it from
> bufferbloat, a situation in which excess buffers in the network cause high
> latency and latency variation. Currently, it supports RED for active
> queue management. However, more advanced queue management is
> required to address this problem and provide the desired quality of service
> to users.
> >
> > This solution (RFC) proposes the use of a new algorithm called "PIE"
> (Proportional Integral controller Enhanced) that can effectively and directly
> control queuing latency to address the bufferbloat problem.
> >
> > The implementation of the mentioned functionality includes modifying
> existing data structures, adding a new set of data structures to the
> library, and adding PIE-related APIs.
> > This affects structures in the public API/ABI, which is why a deprecation
> notice is going to be prepared and sent.
> >
> > Liguzinski, WojciechX (5):
> > sched: add PIE based congestion management
> > example/qos_sched: add PIE support
> > example/ip_pipeline: add PIE support
> > doc/guides/prog_guide: added PIE
> > app/test: add tests for PIE
> >
> > app/test/meson.build | 4 +
> > app/test/test_pie.c | 1065 ++++++++++++++++++
> > config/rte_config.h | 1 -
> > doc/guides/prog_guide/glossary.rst | 3 +
> > doc/guides/prog_guide/qos_framework.rst | 64 +-
> > doc/guides/prog_guide/traffic_management.rst | 13 +-
> > drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
> > examples/ip_pipeline/tmgr.c | 142 +--
> > examples/qos_sched/cfg_file.c | 127 ++-
> > examples/qos_sched/cfg_file.h | 5 +
> > examples/qos_sched/init.c | 27 +-
> > examples/qos_sched/main.h | 3 +
> > examples/qos_sched/profile.cfg | 196 ++--
> > lib/sched/meson.build | 3 +-
> > lib/sched/rte_pie.c | 86 ++
> > lib/sched/rte_pie.h | 398 +++++++
> > lib/sched/rte_sched.c | 241 ++--
> > lib/sched/rte_sched.h | 63 +-
> > lib/sched/version.map | 4 +
> > 19 files changed, 2172 insertions(+), 279 deletions(-)
> > create mode 100644 app/test/test_pie.c
> > create mode 100644 lib/sched/rte_pie.c
> > create mode 100644 lib/sched/rte_pie.h
> >
> > --
> > 2.25.1
> >
> > Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> >
>
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-21 14:16 0% ` Kinsella, Ray
@ 2021-10-27 6:34 0% ` Wang, Yinan
1 sibling, 0 replies; 200+ results
From: Wang, Yinan @ 2021-10-27 6:34 UTC (permalink / raw)
To: Stephen Hemminger, dev
Cc: Pattan, Reshma, Ray Kinsella, Burakov, Anatoly, Ling, WeiX, He,
Xingguang
Hi Hemminger,
I met an issue when using dpdk-pdump with your patch: when we try to capture packets from a virtio port, all captured packets show as malformed, and there is no issue if your patch is removed. Bug link: https://bugs.dpdk.org/show_bug.cgi?id=840
Could you help take a look at this issue?
BR,
Yinan
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Stephen Hemminger
> Sent: 2021/10/21 5:43
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Pattan, Reshma
> <reshma.pattan@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Burakov,
> Anatoly <anatoly.burakov@intel.com>
> Subject: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
>
> This enhances the DPDK pdump library to support new
> pcapng format and filtering via BPF.
>
> The internal client/server protocol is changed to support
> two versions: the original pdump basic version and a
> new pcapng version.
>
> The internal version number (not part of exposed API or ABI)
> is intentionally increased to cause any attempt to try
> mismatched primary/secondary process to fail.
>
> Add new API to do allow filtering of captured packets with
> DPDK BPF (eBPF) filter program. It keeps statistics
> on packets captured, filtered, and missed (because ring was full).
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Reshma Pattan <reshma.pattan@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
@ 2021-10-27 11:03 3% ` Van Haaren, Harry
2021-10-27 11:41 0% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2021-10-27 11:03 UTC (permalink / raw)
To: Thomas Monjalon, Aman Kumar
Cc: dev, viacheslavo, Burakov, Anatoly, keesang.song, aman.kumar,
jerinjacobk, Ananyev, Konstantin, Richardson, Bruce,
honnappa.nagarahalli, Ruifeng Wang, David Christensen,
david.marchand, stephen
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> Sent: Wednesday, October 27, 2021 9:13 AM
> To: Aman Kumar <aman.kumar@vvdntech.in>
> Cc: dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly
> <anatoly.burakov@intel.com>; keesang.song@amd.com;
> aman.kumar@vvdntech.in; jerinjacobk@gmail.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang
> <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>;
> david.marchand@redhat.com; stephen@networkplumber.org
> Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy
> support for AMD platform
>
> 27/10/2021 09:28, Aman Kumar:
> > This patch provides a rte_memcpy* call with temporal stores.
> > Use -Dcpu_instruction_set=znverX with build to enable this API.
> >
> > Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
>
> For the series, Acked-by: Thomas Monjalon <thomas@monjalon.net>
> With the hope that such optimization will go in libc in a near future.
>
> If there is no objection, I will merge this AMD-specific series in 21.11-rc2.
> It should not affect other platforms.
Hi Folks,
This patchset was brought to my attention, and I have a few concerns.
I'll add short snippets of context from the patch here so I can refer to it below;
+/**
+ * Copy 16 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy16_ts(uint8_t *dst, uint8_t *src)
+{
+ __m128i var128;
+
+ var128 = _mm_stream_load_si128((__m128i *)src);
+ _mm_storeu_si128((__m128i *)dst, var128);
+}
1) What is fundamentally specific to the znverX CPU? Is there any reason this can not just be enabled for x86-64 generic with SSE4.1 ISA requirements?
_mm_stream_load_si128() is part of SSE4.1
_mm_storeu_si128() is SSE2.
Using the intrinsics guide for lookup of intrinsics to ISA level: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html?wapkw=intrinsics%20guide#text=_mm_stream_load&ig_expand=6884
2) Are -D options allowed to change/break API/ABI?
By allowing -Dcpu_instruction_set= to change available functions, any application using it is no longer source-code (API) compatible with "DPDK" proper.
This patch essentially splits a "DPDK" app to depend on "DPDK + CPU version -D flag", in an incompatible way (no fallback?).
3) The stream load instruction used here *requires* 16-byte alignment for its operand.
This is not documented, and worse, a uint8_t* is accepted, which is cast to (__m128i *).
This cast hides the compiler warning for expanding type-alignments.
And the code itself is broken - passing a "src" parameter that is not 16-byte aligned will segfault.
4) Temporal and Non-temporal are not logically presented here.
Temporal loads/stores are normal loads/stores. They use the L1/L2 caches.
Non-temporal loads/stores indicate that the data will *not* be used again in a short space of time.
Non-temporal means "having no relation to time" according to my internet search.
5) The *store* here uses a normal store (temporal, targets cache). The *load* however is a streaming (non-temporal, no cache) load.
It is not clearly documented that A) stream load will be used.
The inverse is documented "copy with ts" aka, copy with temporal store.
Is documenting the store as temporal meant to imply that the load is non-temporal?
6) What is the use-case for this? When would a user *want* to use this instead of rte_memcpy()?
If the data being loaded is relevant to datapath/packets, presumably other packets might require the
loaded data, so temporal (normal) loads should be used to cache the source data?
7) Why is streaming (non-temporal) loads & stores not used? I guess maybe this is regarding the use-case,
but its not clear to me right now why loads are NT, and stores are T.
All in all, I do not think merging this patch is a good idea. I would like to understand the motivation for adding
this type of function, and then see it being done in a way that is clearly documented regarding temporal loads/stores,
and not changing/adding APIs for specific CPUs.
So apologies for late feedback, but this is not of high enough quality to be merged to DPDK right now, NACK.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
2021-10-27 11:03 3% ` Van Haaren, Harry
@ 2021-10-27 11:41 0% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-10-27 11:41 UTC (permalink / raw)
To: Van Haaren, Harry, Thomas Monjalon, Aman Kumar
Cc: dev, viacheslavo, Burakov, Anatoly, Song, Keesang, jerinjacobk,
Ananyev, Konstantin, Richardson, Bruce, honnappa.nagarahalli,
Ruifeng Wang, David Christensen, david.marchand, stephen
On 2021-10-27 13:03, Van Haaren, Harry wrote:
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
>> Sent: Wednesday, October 27, 2021 9:13 AM
>> To: Aman Kumar <aman.kumar@vvdntech.in>
>> Cc: dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly
>> <anatoly.burakov@intel.com>; keesang.song@amd.com;
>> aman.kumar@vvdntech.in; jerinjacobk@gmail.com; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang
>> <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>;
>> david.marchand@redhat.com; stephen@networkplumber.org
>> Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy
>> support for AMD platform
>>
>> 27/10/2021 09:28, Aman Kumar:
>>> This patch provides a rte_memcpy* call with temporal stores.
>>> Use -Dcpu_instruction_set=znverX with build to enable this API.
>>>
>>> Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
>> For the series, Acked-by: Thomas Monjalon <thomas@monjalon.net>
>> With the hope that such optimization will go in libc in a near future.
>>
>> If there is no objection, I will merge this AMD-specific series in 21.11-rc2.
>> It should not affect other platforms.
> Hi Folks,
>
> This patchset was brought to my attention, and I have a few concerns.
> I'll add short snippets of context from the patch here so I can refer to it below;
>
> +/**
> + * Copy 16 bytes from one location to another,
> + * with temporal stores
> + */
> +static __rte_always_inline void
> +rte_copy16_ts(uint8_t *dst, uint8_t *src)
> +{
> + __m128i var128;
> +
> + var128 = _mm_stream_load_si128((__m128i *)src);
> + _mm_storeu_si128((__m128i *)dst, var128);
> +}
>
> 1) What is fundamentally specific to the znverX CPU? Is there any reason this can not just be enabled for x86-64 generic with SSE4.1 ISA requirements?
> _mm_stream_load_si128() is part of SSE4.1
> _mm_storeu_si128() is SSE2.
> Using the intrinsics guide for lookup of intrinsics to ISA level: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html?wapkw=intrinsics%20guide#text=_mm_stream_load&ig_expand=6884
>
> 2) Are -D options allowed to change/break API/ABI?
> By allowing -Dcpu_instruction_set= to change available functions, any application using it is no longer source-code (API) compatible with "DPDK" proper.
> This patch essentially splits a "DPDK" app to depend on "DPDK + CPU version -D flag", in an incompatible way (no fallback?).
>
> 3) The stream load instruction used here *requires* 16-byte alignment for its operand.
> This is not documented, and worse, a uint8_t* is accepted, which is cast to (__m128i *).
> This cast hides the compiler warning for expanding type-alignments.
> And the code itself is broken - passing a "src" parameter that is not 16-byte aligned will segfault.
>
> 4) Temporal and Non-temporal are not logically presented here.
> Temporal loads/stores are normal loads/stores. They use the L1/L2 caches.
> Non-temporal loads/stores indicate that the data will *not* be used again in a short space of time.
> Non-temporal means "having no relation to time" according to my internet search.
>
> 5) The *store* here uses a normal store (temporal, targets cache). The *load* however is a streaming (non-temporal, no cache) load.
> It is not clearly documented that A) stream load will be used.
> The inverse is documented "copy with ts" aka, copy with temporal store.
> Is documenting the store as temporal meant to imply that the load is non-temporal?
>
> 6) What is the use-case for this? When would a user *want* to use this instead of rte_memcpy()?
> If the data being loaded is relevant to datapath/packets, presumably other packets might require the
> loaded data, so temporal (normal) loads should be used to cache the source data?
I'm not sure if your first question is rhetorical or not, but a memcpy()
in a NT variant is certainly useful. One use case for a memcpy() with
temporal loads and non-temporal stores is if you need to archive packet
payload for (distant, potential) future use, and want to avoid causing
unnecessary LLC evictions while doing so.
> 7) Why is streaming (non-temporal) loads & stores not used? I guess maybe this is regarding the use-case,
> but its not clear to me right now why loads are NT, and stores are T.
>
> All in all, I do not think merging this patch is a good idea. I would like to understand the motivation for adding
> this type of function, and then see it being done in a way that is clearly documented regarding temporal loads/stores,
> and not changing/adding APIs for specific CPUs.
>
> So apologies for late feedback, but this is not of high enough quality to be merged to DPDK right now, NACK.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
@ 2021-10-27 12:03 4% ` Xia, Chenbo
0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-10-27 12:03 UTC (permalink / raw)
To: Thomas Monjalon, Harris, James R, Walker, Benjamin
Cc: Liu, Changpeng, David Marchand, dev, Aaron Conole, Zawadzki, Tomasz
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 14, 2021 4:26 PM
> To: Harris, James R <james.r.harris@intel.com>; Walker, Benjamin
> <benjamin.walker@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: Liu, Changpeng <changpeng.liu@intel.com>; David Marchand
> <david.marchand@redhat.com>; dev@dpdk.org; Aaron Conole <aconole@redhat.com>;
> Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
>
> 14/10/2021 10:07, Xia, Chenbo:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 14/10/2021 09:00, Xia, Chenbo:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 14/10/2021 04:21, Xia, Chenbo:
> > > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > > Yes I think we need to agree on functions to keep as-is for
> > > compatibility.
> > > > > > > Waiting for your input please.
> > > > > >
> > > > > > So, do you mean currently DPDK doesn't guarantee ABI for drivers
> > > > >
> > > > > Yes
> > > > >
> > > > > > but could have driver ABI in the future?
> > > > >
> > > > > I don't think so, not general compatibility,
> > > > > but we can think about a way to avoid breaking SPDK specifically,
> > > > > which has less requirements.
> > > >
> > > > So the problem here is exposing some APIs to SPDK directly? Without the
> > > 'enable_driver_sdk'
> > > > option, I don't see a solution of both exposed and not-ABI. Any idea in
> your
> > > mind?
> > >
> > > No the idea is to keep using enable_driver_sdk.
> > > But so far, there is no compatibility guarantee for driver SDK.
> > > The discussion is about which basic compatibility requirement is needed
> for
> > > SPDK.
> >
> > Sorry for not understanding your point quickly, but what's the difference of
> > 'general compatibility' and 'basic compatibility'? Because in my mind, one
> > struct or function should either be ABI-compatible or not. Could you help
> explain
> > it a bit?
>
> I wonder whether we could have a guarantee for a subset of structs and
> functions.
> Anyway, this is just opening the discussion to collect some inputs first.
> Then we'll have to check what is possible and get a techboard approval.
>
After going through the related code in SPDK, I think we can add some new functions and keep
some macros in the exposed header (i.e., rte_bus_pci.h) for SPDK to register a PCI driver
and get the needed info.
Most structs/macros will be hidden, and SPDK can use the new proposed APIs and a small set
of macros/structs to build. In this way, the problem of SPDK building with DPDK distros
and the ABI issue can both be solved. Structs like rte_pci_device and rte_pci_driver
can be hidden to minimize the PCI bus ABI.
Thomas & SPDK folks, please share your opinions of above.
Thanks,
Chenbo
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
@ 2021-10-27 14:10 2% ` Van Haaren, Harry
2021-10-27 14:31 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2021-10-27 14:10 UTC (permalink / raw)
To: Aman Kumar, Ananyev, Konstantin
Cc: mattias.ronnblom, Thomas Monjalon, dev, viacheslavo, Burakov,
Anatoly, Song, Keesang, jerinjacobk, Richardson, Bruce,
honnappa.nagarahalli, Ruifeng Wang, David Christensen,
david.marchand, stephen
From: Aman Kumar <aman.kumar@vvdntech.in>
Sent: Wednesday, October 27, 2021 2:35 PM
To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom <mattias.ronnblom@ericsson.com>; Thomas Monjalon <thomas@monjalon.net>; dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly <anatoly.burakov@intel.com>; Song, Keesang <Keesang.Song@amd.com>; jerinjacobk@gmail.com; Richardson, Bruce <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>; david.marchand@redhat.com; stephen@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
Hi Aman,
Please send plain-text email; converting to other formats makes writing inline replies difficult.
I've converted this reply email back to plain text, and will annotate the email below with [<author> wrote]:
On Wed, Oct 27, 2021 at 5:53 PM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
>
> Hi Mattias,
>
> > > 6) What is the use-case for this? When would a user *want* to use this instead
> > of rte_memcpy()?
> > > If the data being loaded is relevant to datapath/packets, presumably other
> > packets might require the
> > > loaded data, so temporal (normal) loads should be used to cache the source
> > data?
> >
> >
> > I'm not sure if your first question is rhetorical or not, but a memcpy()
> > in a NT variant is certainly useful. One use case for a memcpy() with
> > temporal loads and non-temporal stores is if you need to archive packet
> > payload for (distant, potential) future use, and want to avoid causing
> > unnecessary LLC evictions while doing so.
>
> Yes I agree that there are certainly benefits in using cache-locality hints.
> There is an open question around if the src or dst or both are non-temporal.
>
> In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> 1) Loads are NT (so loaded data is not cached for future packets)
> 2) Stores are T (so copied/dst data is now resident in L1/L2)
>
> In theory there might even be valid uses for this type of memcpy where loaded
> data is not needed again soon and stored data is referenced again soon,
> although I cannot think of any here while typing this mail..
>
> I think some use-case examples, and clear documentation on when/how to choose
> between rte_memcpy() or any (potential future) rte_memcpy_nt() variants is required
> to progress this patch.
>
> Assuming a strong use-case exists, and it can be clearly indicators to users of DPDK APIs which
> rte_memcpy() to use, we can look at technical details around enabling the implementation.
>
[Konstantin wrote]:
+1 here.
Function behaviour and restrictions (src parameter needs to be 16/32 B aligned, etc.),
along with expected usage scenarios have to be documented properly.
Again, as Harry pointed out, I don't see any AMD specific instructions in this function,
so presumably such function can go into __AVX2__ code block and no new defines will
be required.
[Aman wrote]:
Agreed that the APIs are generic, but we've kept them under an AMD flag for the simple reason that they are NOT tested on any other platform.
A use-case showing how to use this was planned earlier for the mlx5 PMD but dropped in this version of the patch, as the data path of mlx5 is going to be refactored soon and it may not be useful for future versions of mlx5 (>22.02).
Ref link: https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this into a future version)
The patch in the link basically enhances the mlx5 mprq implementation for our specific use-case; with a 128B packet size, we achieve ~60% better performance. We understand that the use of this copy function should be documented, which we plan to do along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep this under the AMD flag for now, as suggested by Thomas?
[HvH wrote]:
As an open-source community, any contributions should aim to improve the whole.
In the past, numerous improvements have been merged to DPDK that improve performance.
Sometimes these are architecture specific (x86/arm/ppc) sometimes the are ISA specific (SSE, AVX512, NEON).
I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
A quick "grep" through the "dpdk/lib" directory does not show any place where PMD or generic code
has been explicitly optimized for a *specific platform*.
Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
Please take a step back from the code, and look at what this patch asks of DPDK:
"Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".
Other patches that enhance performance of DPDK ask this:
"Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".
=== Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.
Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
But do so in a way that does not benefit only a specific platform - do so in a way that enhances all of DPDK, as
other patches have done for the DPDK that this patch is built on.
If you have concerns that the PMD maintainers will not accept the changes due to potential regressions on
other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.
=== Regarding specifically the request for "can we still keep under AMD flag for now"?
I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
The value of EAL is to provide a common abstraction. This platform-specific flag breaks the abstraction,
and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
So, no, we should not introduce APIs based on any compile-time flag.
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
2021-10-27 14:10 2% ` Van Haaren, Harry
@ 2021-10-27 14:31 0% ` Thomas Monjalon
2021-10-29 16:01 0% ` Song, Keesang
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-27 14:31 UTC (permalink / raw)
To: Aman Kumar, Ananyev, Konstantin, Van Haaren, Harry
Cc: mattias.ronnblom, dev, viacheslavo, Burakov, Anatoly, Song,
Keesang, jerinjacobk, Richardson, Bruce, honnappa.nagarahalli,
Ruifeng Wang, David Christensen, david.marchand, stephen
27/10/2021 16:10, Van Haaren, Harry:
> From: Aman Kumar <aman.kumar@vvdntech.in>
> On Wed, Oct 27, 2021 at 5:53 PM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> >
> > Hi Mattias,
> >
> > > > 6) What is the use-case for this? When would a user *want* to use this instead
> > > of rte_memcpy()?
> > > > If the data being loaded is relevant to datapath/packets, presumably other
> > > packets might require the
> > > > loaded data, so temporal (normal) loads should be used to cache the source
> > > data?
> > >
> > >
> > > I'm not sure if your first question is rhetorical or not, but a memcpy()
> > > in a NT variant is certainly useful. One use case for a memcpy() with
> > > temporal loads and non-temporal stores is if you need to archive packet
> > > payload for (distant, potential) future use, and want to avoid causing
> > > unnecessary LLC evictions while doing so.
> >
> > Yes I agree that there are certainly benefits in using cache-locality hints.
> > There is an open question around if the src or dst or both are non-temporal.
> >
> > In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> > 1) Loads are NT (so loaded data is not cached for future packets)
> > 2) Stores are T (so copied/dst data is now resident in L1/L2)
> >
> > In theory there might even be valid uses for this type of memcpy where loaded
> > data is not needed again soon and stored data is referenced again soon,
> > although I cannot think of any here while typing this mail..
> >
> > I think some use-case examples, and clear documentation on when/how to choose
> > between rte_memcpy() or any (potential future) rte_memcpy_nt() variants is required
> > to progress this patch.
> >
> > Assuming a strong use-case exists, and it can be clearly indicators to users of DPDK APIs which
> > rte_memcpy() to use, we can look at technical details around enabling the implementation.
> >
>
> [Konstantin wrote]:
> +1 here.
> Function behaviour and restrictions (src parameter needs to be 16/32 B aligned, etc.),
> along with expected usage scenarios have to be documented properly.
> Again, as Harry pointed out, I don't see any AMD specific instructions in this function,
> so presumably such function can go into __AVX2__ code block and no new defines will
> be required.
>
>
> [Aman wrote]:
> Agreed that the APIs are generic, but we've kept them under an AMD flag for the simple reason that they are NOT tested on any other platform.
> A use-case showing how to use this was planned earlier for the mlx5 PMD but dropped in this version of the patch, as the data path of mlx5 is going to be refactored soon and it may not be useful for future versions of mlx5 (>22.02).
> Ref link: https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this into a future version)
> The patch in the link basically enhances the mlx5 mprq implementation for our specific use-case; with a 128B packet size, we achieve ~60% better performance. We understand that the use of this copy function should be documented, which we plan to do along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep this under the AMD flag for now, as suggested by Thomas?
I said I could merge if there is no objection.
I've overlooked that it's adding completely new functions in the API.
And the comments go in the direction of what I asked in previous version:
what is specific to AMD here?
Now seeing the valid objections, I agree it should be reworked.
We must provide API to applications which is generic, stable and well documented.
> [HvH wrote]:
> As an open-source community, any contributions should aim to improve the whole.
> In the past, numerous improvements have been merged to DPDK that improve performance.
> Sometimes these are architecture specific (x86/arm/ppc) sometimes the are ISA specific (SSE, AVX512, NEON).
>
> I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
> A quick "grep" through the "dpdk/lib" directory does not show any place where PMD or generic code
> has been explicitly optimized for a *specific platform*.
>
> Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
> But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
>
> Please take a step back from the code, and look at what this patch asks of DPDK:
> "Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".
>
> Other patches that enhance performance of DPDK ask this:
> "Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".
>
>
> === Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
> I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.
>
> Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
> But do so in a way that does not benefit only a specific platform - do so in a way that enhances all of DPDK, as
> other patches have done for the DPDK that this patch is built on.
>
> If you have concerns that the PMD maintainers will not accept the changes due to potential regressions on
> other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.
>
>
> === Regarding specifically the request for "can we still keep under AMD flag for now"?
> I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
> The value of EAL is to provide a common abstraction. This platform-specific flag breaks the abstraction,
> and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
> So, no, we should not introduce APIs based on any compile-time flag.
I agree
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
@ 2021-10-27 14:59 0% ` Thomas Monjalon
2021-10-28 14:54 0% ` Sebastian, Selwin
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-27 14:59 UTC (permalink / raw)
To: Selwin Sebastian; +Cc: dev, David Marchand
Any update please?
06/09/2021 19:17, David Marchand:
> On Mon, Sep 6, 2021 at 6:56 PM Selwin Sebastian
> <selwin.sebastian@amd.com> wrote:
> >
> > Add support for PTDMA driver
>
> - This description is rather short.
>
> Can this new driver be implemented as a dmadev?
> See (current revision):
> https://patchwork.dpdk.org/project/dpdk/list/?series=18677&state=%2A&archive=both
>
>
> - In any case, quick comments on this patch:
> Please update release notes.
> vfio-pci should be preferred over igb_uio.
> Please check indent in meson.
> ABI version is incorrect in version.map.
> RTE_LOG_REGISTER_DEFAULT should be preferred.
> The patch is monolithic, could it be split per functionality to ease review?
>
> Copy relevant maintainers and/or (sub-)tree maintainers to make them
> aware of this work, and get those patches reviewed.
> Please submit new revisions of patchsets with increased revision
> number in title + changelog that helps track what changed between
> revisions.
>
> Some of those points are described in:
> https://doc.dpdk.org/guides/contributing/patches.html
>
>
> Thanks.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [Bug 842] [dpdk-21.11 rc1] FIPS tests are failing
@ 2021-10-27 17:43 2% bugzilla
0 siblings, 0 replies; 200+ results
From: bugzilla @ 2021-10-27 17:43 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=842
Bug ID: 842
Summary: [dpdk-21.11 rc1] FIPS tests are failing
Product: DPDK
Version: 21.11
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: minor
Priority: Normal
Component: cryptodev
Assignee: dev@dpdk.org
Reporter: varalakshmi.s@intel.com
Target Milestone: ---
Environment
DPDK Version: 6c390cee976e33b1e9d8562d32c9d3ebe5d9ce94
OS: 5.4.0-89-generic #100~18.04.1-Ubuntu SMP Wed Sep 29 10:59:42 UTC 2021
x86_64 x86_64 x86_64 GNU/Linux
Compiler: 7.5.0
Hardware platform: Purely
Steps to reproduce
root@dpdk-yaobing-purely147:~/dpdk#
x86_64-native-linuxapp-gcc/examples/dpdk-fips_validation -l 9,10,66 -a
0000:af:00.0 --vdev crypto_aesni_gcm_pmd_1 --socket-mem 2048,2048 --legacy-mem
-n 6 -- --req-file /root/FIPS/GCM/req --rsp-file /root/FIPS/GCM/resp
--cryptodev crypto_aesni_gcm_pmd_1 --path-is-folder --cryptodev-id 0
--self-test
EAL: Detected CPU lcores: 112
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: VFIO support initialized
CRYPTODEV: Creating cryptodev crypto_aesni_gcm_pmd_1
CRYPTODEV: Initialisation parameters - name: crypto_aesni_gcm_pmd_1,socket id: 0, max queue pairs: 8
ipsec_mb_create() line 140: IPSec Multi-buffer library version used: 1.0.0
CRYPTODEV: elt_size 0 is expanded to 384
PMD: Testing (ID 0) SELF_TEST_AES128_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 0) SELF_TEST_AES128_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 1) SELF_TEST_AES192_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 1) SELF_TEST_AES192_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 2) SELF_TEST_AES256_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 2) SELF_TEST_AES256_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 3) SELF_TEST_3DES_2KEY_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 3) SELF_TEST_3DES_2KEY_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 4) SELF_TEST_3DES_3KEY_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 4) SELF_TEST_3DES_3KEY_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 5) SELF_TEST_AES128_CCM_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 5) SELF_TEST_AES128_CCM_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 6) SELF_TEST_SHA1_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 6) SELF_TEST_SHA1_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 7) SELF_TEST_SHA224_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 7) SELF_TEST_SHA224_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 8) SELF_TEST_SHA256_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 8) SELF_TEST_SHA256_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 9) SELF_TEST_SHA384_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 9) SELF_TEST_SHA384_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 10) SELF_TEST_SHA512_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 10) SELF_TEST_SHA512_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 11) SELF_TEST_AES_CMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 11) SELF_TEST_AES_CMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 12) SELF_TEST_AES128_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 12) SELF_TEST_AES128_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 13) SELF_TEST_AES192_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 13) SELF_TEST_AES192_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 14) SELF_TEST_AES256_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 14) SELF_TEST_AES256_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 15) SELF_TEST_AES128_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 15) SELF_TEST_AES128_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 16) SELF_TEST_AES192_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 16) SELF_TEST_AES192_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 17) SELF_TEST_AES256_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 17) SELF_TEST_AES256_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: PMD 0 finished self-test successfully
CRYPTODEV: elt_size 0 is expanded to 384
Segmentation fault (core dumped)
Expected Result
Test is expected to Pass with no errors.
Stack Trace or Log
-----------------------------------------------------------------
f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f is the first bad commit
commit f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f
Author: Akhil Goyal <gakhil@marvell.com>
Date: Wed Oct 20 16:57:53 2021 +0530
cryptodev: use new flat array in fast path API
Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user app is required) and
PMD developers (no changes in PMD is required).
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
--
You are receiving this mail because:
You are the assignee for the bug.
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
2021-10-25 21:40 4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
@ 2021-10-28 7:10 0% ` Jiang, YuX
2021-11-01 11:53 0% ` Jiang, YuX
2021-11-05 21:51 0% ` Thinh Tran
2021-11-08 10:50 0% ` Pei Zhang
2 siblings, 1 reply; 200+ results
From: Jiang, YuX @ 2021-10-28 7:10 UTC (permalink / raw)
To: Thomas Monjalon, dev (dev@dpdk.org)
Cc: Devlin, Michelle, Mcnamara, John, Yigit, Ferruh
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> Sent: Tuesday, October 26, 2021 5:41 AM
> To: announce@dpdk.org
> Subject: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
>
> A new DPDK release candidate is ready for testing:
> https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
>
> There are 1171 new patches in this snapshot, big as expected.
>
> Release notes:
> https://doc.dpdk.org/guides/rel_notes/release_21_11.html
>
> Highlights of 21.11-rc1:
> * General
> - more than 512 MSI-X interrupts
> - hugetlbfs subdirectories
> - mempool flag for non-IO usages
> - device class for DMA accelerators
> - DMA drivers for Intel DSA and IOAT
> * Networking
> - MTU handling rework
> - get all MAC addresses of a port
> - RSS based on L3/L4 checksum fields
> - flow match on L2TPv2 and PPP
> - flow flex parser for custom header
> - control delivery of HW Rx metadata
> - transfer flows API rework
> - shared Rx queue
> - Windows support of Intel e1000, ixgbe and iavf
> - testpmd multi-process
> - pcapng library and dumpcap tool
> * API/ABI
> - API namespace improvements (mempool, mbuf, ethdev)
> - API internals hidden (intr, ethdev, security, cryptodev, eventdev,
> cmdline)
> - flags check for future ABI compatibility (memzone, mbuf, mempool)
>
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
>
> Thank you everyone
>
Update on the test status for the Intel part. So far, the dpdk 21.11-rc1 test execution rate is 50%. No critical issue has been found.
However, one high-priority issue https://bugs.dpdk.org/show_bug.cgi?id=843 impacts cryptodev function and performance tests.
The bad commit id is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author: Harman Kalra <hkalra@marvell.com>. Please help to handle it.
# Basic Intel(R) NIC testing
* Build or compile:
*Build: cover the build test combination with latest GCC/Clang/ICC version and the popular OS revision such as Ubuntu20.04, Fedora34, RHEL8.4, etc.
- All test done.
*Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such as Ubuntu20.04 and Fedora34.
- All test done.
- Found one bug: https://bugs.dpdk.org/show_bug.cgi?id=841. Marvell dev has provided a patch and the Intel validation team verified that it passes.
Patch link: http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-ktejasree@marvell.com/
* PF(i40e, ixgbe): test scenarios including RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- Execution rate is 60%. No new issue is found yet.
* VF(i40e, ixgbe): test scenarios including VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- Execution rate is 60%.
- One bug https://bugs.dpdk.org/show_bug.cgi?id=845 about "vm_hotplug: vf testpmd core dumped after executing "device_del dev1" in qemu" is found.
Bad commit id is commit c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd Author: Harman Kalra <hkalra@marvell.com>
* PF/VF(ice): test scenarios including Switch features/Package Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share code update/Flexible Descriptor, etc.
- Execution rate is 60%.
- One bug about kni_autotest failing on SUSE 15.3. Trying to find the bad commit id. Known issue; Intel dev is investigating.
* Intel NIC single core/NIC performance: test scenarios including PF/VF single core performance test, RFC2544 Zero packet loss performance test, etc.
- Execution rate is 60%.
- One bug about NIC single core performance dropping 2% was found. Bad commit id is commit: efc6f9104c80d39ec168/Author: Olivier Matz <olivier.matz@6wind.com>
* Power and IPsec:
* Power: test scenarios including bi-direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
- All passed.
* IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - QAT&SW/FIB library, etc.
- Not Start.
# Basic cryptodev and virtio testing
* Virtio: both function and performance test are covered. Such as PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMAWARE ESXI 7.0u3, etc.
- Execution rate is 80%.
- Two new bugs are found.
- One about VMware ESXi 7.0U3: failed to start port. Intel dev is investigating.
- One https://bugs.dpdk.org/show_bug.cgi?id=840 about "dpdk-pdump capture the pcap file content are wrong" is found.
Bad commit id: commit 10f726efe26c55805cf0bf6ca1b80e97b98eb724 //bad commit id Author: Stephen Hemminger <stephen@networkplumber.org>
* Cryptodev:
*Function test: test scenarios including Cryptodev API testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
- Execution rate is 60%
- Two new bugs are found.
- One https://bugs.dpdk.org/show_bug.cgi?id=843 about crypto performance tests for QAT are failing. Bad commit id is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author: Harman Kalra <hkalra@marvell.com>
- One https://bugs.dpdk.org/show_bug.cgi?id=842 about FIP tests are failing. Bad commit id is commit f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f Author: Akhil Goyal gakhil@marvell.com
*Performance test: test scenarios including Throughput Performance /Cryptodev Latency, etc.
- Execution rate is 10%. Most of the performance tests are blocked by Bug 843.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
@ 2021-10-28 8:35 3% Thomas Monjalon
2021-10-28 8:38 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-28 8:35 UTC (permalink / raw)
To: dev; +Cc: matan, Ferruh Yigit, Andrew Rybchenko, Ray Kinsella
The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
and is integrated in error checks of ethdev library.
It is promoted as stable ABI.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
lib/ethdev/rte_ethdev.h | 4 ----
lib/ethdev/version.map | 2 +-
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 24f30b4b28..09d60351a3 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2385,9 +2385,6 @@ int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue,
uint16_t nb_tx_queue, const struct rte_eth_conf *eth_conf);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
* Check if an Ethernet device was physically removed.
*
* @param port_id
@@ -2395,7 +2392,6 @@ int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue,
* @return
* 1 when the Ethernet device is removed, otherwise 0.
*/
-__rte_experimental
int
rte_eth_dev_is_removed(uint16_t port_id);
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index e1abe99729..c2fb0669a4 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -31,6 +31,7 @@ DPDK_22 {
rte_eth_dev_get_supported_ptypes;
rte_eth_dev_get_vlan_offload;
rte_eth_dev_info_get;
+ rte_eth_dev_is_removed;
rte_eth_dev_is_valid_port;
rte_eth_dev_logtype;
rte_eth_dev_mac_addr_add;
@@ -148,7 +149,6 @@ EXPERIMENTAL {
rte_mtr_stats_update;
# added in 18.02
- rte_eth_dev_is_removed;
rte_eth_dev_owner_delete;
rte_eth_dev_owner_get;
rte_eth_dev_owner_new;
--
2.33.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
2021-10-28 8:35 3% [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable Thomas Monjalon
@ 2021-10-28 8:38 0% ` Kinsella, Ray
2021-10-28 8:56 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-28 8:38 UTC (permalink / raw)
To: Thomas Monjalon, dev; +Cc: matan, Ferruh Yigit, Andrew Rybchenko
On 28/10/2021 09:35, Thomas Monjalon wrote:
> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
> and is integrated in error checks of ethdev library.
>
> It is promoted as stable ABI.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> lib/ethdev/rte_ethdev.h | 4 ----
> lib/ethdev/version.map | 2 +-
> 2 files changed, 1 insertion(+), 5 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
2021-10-28 8:38 0% ` Kinsella, Ray
@ 2021-10-28 8:56 0% ` Andrew Rybchenko
2021-11-04 10:45 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-28 8:56 UTC (permalink / raw)
To: Kinsella, Ray, Thomas Monjalon, dev; +Cc: matan, Ferruh Yigit
On 10/28/21 11:38 AM, Kinsella, Ray wrote:
>
>
> On 28/10/2021 09:35, Thomas Monjalon wrote:
>> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
>> and is integrated in error checks of ethdev library.
>>
>> It is promoted as stable ABI.
>>
>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>> ---
>> lib/ethdev/rte_ethdev.h | 4 ----
>> lib/ethdev/version.map | 2 +-
>> 2 files changed, 1 insertion(+), 5 deletions(-)
>>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v19 0/5] Add PIE support for HQoS library
2021-10-25 11:32 3% ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
2021-10-26 8:24 3% ` Liu, Yu Y
@ 2021-10-28 10:17 3% ` Liguzinski, WojciechX
2021-11-02 23:57 3% ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-28 10:17 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide the desired
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modification of existing data
structures and addition of a new set of data structures to the library, plus PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 241 ++--
lib/sched/rte_sched.h | 63 +-
lib/sched/version.map | 4 +
19 files changed, 2172 insertions(+), 279 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
2021-10-27 14:59 0% ` Thomas Monjalon
@ 2021-10-28 14:54 0% ` Sebastian, Selwin
0 siblings, 0 replies; 200+ results
From: Sebastian, Selwin @ 2021-10-28 14:54 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, David Marchand
[AMD Official Use Only]
Hi,
I am working on making the ptdma driver a dmadev. Will submit a new patch for review.
Thanks and Regards
Selwin Sebastian
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Wednesday, October 27, 2021 8:29 PM
To: Sebastian, Selwin <Selwin.Sebastian@amd.com>
Cc: dev@dpdk.org; David Marchand <david.marchand@redhat.com>
Subject: Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
[CAUTION: External Email]
Any update please?
06/09/2021 19:17, David Marchand:
> On Mon, Sep 6, 2021 at 6:56 PM Selwin Sebastian
> <selwin.sebastian@amd.com> wrote:
> >
> > Add support for PTDMA driver
>
> - This description is rather short.
>
> Can this new driver be implemented as a dmadev?
> See (current revision):
> https://patchwork.dpdk.org/project/dpdk/list/?series=18677&state=%2A&archive=both
>
>
> - In any case, quick comments on this patch:
> Please update release notes.
> vfio-pci should be preferred over igb_uio.
> Please check indent in meson.
> ABI version is incorrect in version.map.
> RTE_LOG_REGISTER_DEFAULT should be preferred.
> The patch is monolithic, could it be split per functionality to ease review?
>
> Copy relevant maintainers and/or (sub-)tree maintainers to make them
> aware of this work, and get those patches reviewed.
> Please submit new revisions of patchsets with increased revision
> number in title + changelog that helps track what changed between
> revisions.
>
> Some of those points are described in:
> https://doc.dpdk.org/guides/contributing/patches.html
>
>
> Thanks.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] Windows community call: MoM 2021-10-27
@ 2021-10-28 21:01 4% Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-28 21:01 UTC (permalink / raw)
To: dev
# About
The meeting takes place in MS Teams every two weeks on Wednesday 15:00 UTC.
Note: it is going to be rescheduled.
Ask Harini Ramakrishnan <Harini.Ramakrishnan@microsoft.com> for invitation.
# Agenda
* Patch review
* Opens
# 1. Patch review
1.1. [kmods,v2] windows/netuio: add Intel device ID (William Tu)
http://patchwork.dpdk.org/project/dpdk/patch/20211019190102.1903-1-u9012063@gmail.com/
Ready to be merged.
1.2. [v3] eal/windows: ensure all enabled CPUs are counted (Naty)
http://patchwork.dpdk.org/project/dpdk/patch/1629294360-5737-1-git-send-email-navasile@linux.microsoft.com/
Merged.
1.3. Support MLX5 crypto driver on Windows (Tal)
http://patchwork.dpdk.org/project/dpdk/list/?series=19951
* Limited to crypto/mlx5 PMD, doesn't require Windows maintainers review.
* Issues cross-compiling with MinGW.
1.4. app/test: enable subset of tests on Windows (Jie)
http://patchwork.dpdk.org/project/dpdk/list/?series=19970
* v8 sent, needs review.
* Thomas recommends enabling tests on library-by-library basis.
1.5. eal: Add EAL API for threading (Naty)
http://patchwork.dpdk.org/project/dpdk/list/?series=19478
* Failed to integrate in 21.11:
- Comments came late and require major rework.
- DmitryK is going to send more comments, although small ones.
- This blocks the plan to make DPDK 21.11 static build shippable,
because we still need pthread shim.
* Can be integrated before the next TLS, because only introduces new API
and a unit test for it, doesn't break ABI for non-Windows parts.
1.6. Enable the internal EAL thread API
http://patchwork.dpdk.org/project/dpdk/list/?series=18338
* Depends on 1.5, not integrated.
* Cannot be merged fully until the next TLS,
because it breaks ABI of sync primitives.
* Needs to be revised:
- Parts that don't break ABI can be integrated early.
- This course of action is approved. More time for review and testing.
- Patches need to be rearranged.
1.7. Intel patches merged.
# 2. Opens
2.1. William Tu:
There is no solution for a Windows guest running on Windows host
to get a performant paravirtual device, like NetVSC in Linux.
The only option is VF passthrough.
William wonders if QEMU on Windows allows that.
Also some customers don't want to enable HyperV role for Windows host.
Resolution: no one has relevant experience, William is going to experiment.
2.2. Dmitry Kozlyuk:
Interrupt support draft is ready, but there are fundamental issues
that may require to rework NetUIO and userspace part.
An email thread is started on the topic
explaining the issue and possible solutions
(if someone is interested but not mentioned, tell DmitryK).
Mark Cheatham (Boulder Imaging) is willing to share info about interrupt
support in their app. However, their case is quite specialized
and logic is implemented in the kernel.
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
@ 2021-10-29 15:51 2% ` Jerin Jacob
2021-10-31 9:18 4% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-29 15:51 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2021-10-25 11:03, Jerin Jacob wrote:
> > On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >> On 2021-10-19 20:14, jerinj@marvell.com wrote:
> >>> From: Jerin Jacob <jerinj@marvell.com>
> >>>
> >>>
> >>> Dataplane Workload Accelerator library
> >>> ======================================
> >>>
> >>> Definition of Dataplane Workload Accelerator
> >>> --------------------------------------------
> >>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
> >>> Network controllers and programmable data acceleration engines for
> >>> packet processing, cryptography, regex engines, baseband processing, etc.
> >>> This allows DWA to offload compute/packet processing/baseband/
> >>> cryptography-related workload from the host CPU to save the cost and power.
> >>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
> >>>
> >>> Unlike other devices in DPDK, the DWA device is not fixed-function
> >>> due to the fact that it has CPUs and programmable HW accelerators.
> >>
> >> There are already several instances of DPDK devices with pure-software
> >> implementation. In this regard, a DPU/SmartNIC represents nothing new.
> >> What's new, it seems to me, is a much-increased need to
> >> configure/arrange the processing in complex manners, to avoid bouncing
> >> everything to the host CPU.
> > Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
> > have user plane traffic from/to host. For example, offloading ORAN split 7.2
> > baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
> >
> >> Something like P4 or rte_flow-based hooks or
> >> some other kind of extension. The eventdev adapters solve the same
> >> problem (where on some systems packets go through the host CPU on their
> >> way to the event device, and others do not) - although on a *much*
> >> smaller scale.
> > Yes. Eventdev Adapters only for event device plumbing.
> >
> >
> >>
> >> "Not-fixed function" seems to call for more hot plug support in the
> >> device APIs. Such functionality could then be reused by anything that
> >> can be reconfigured dynamically (FPGAs, firmware-programmed
> >> accelerators, etc.),
> > Yes.
> >
> >> but which may not be able to serve as a RPC
> >> endpoint, like a SmartNIC.
> > It can. That's the reason for choosing TLVs. So that
> > any higher level language can use TLVs like https://github.com/ustropo/uttlv
> > to communicate with the accelerator. TLVs follow the request and
> > response scheme like RPC. So it can warp it under application if needed.
> >
> >>
> >> DWA could be some kind of DPDK-internal framework for managing certain
> >> type of DPUs, but should it be exposed to the user application?
> >
> > Could you clarify a bit more.
> > The offload is represented as a set of TLVs in generic fashion. There
> > is no DPU specific bit in offload representation. See
> > rte_dwa_profiile_l3fwd.h header file.
>
>
> It seems a bit cumbersome to work with TLVs on the user application
> side. Would it be an alternative to have the profile API as a set of C
> APIs instead of TLV-based messaging interface? The underlying
> implementation could still - in many or all cases - be TLVs sent over
> some appropriate transport.
The reasons to pick TLVs are as follows:
1) Very easy to enable ABI compatibility (learned from rte_flow).
2) If it needs to be transported over a network etc., it needs to be
packed; that is easy for an implementation to do with TLV, and it also
gives better performance in such cases by avoiding reformatting or
possibly avoiding memcpy etc.
3) It is easy to plug in another high-level programming language with
just one API.
4) Easy to decouple DWA core library functionality from the profile.
5) Easy to enable an asynchronous scheme using request and response TLVs.
6) Most importantly, we could introduce a type notion with TLV
(connected with the type of message, see TYPE_ATTACHED, TYPE_STOPPED,
TYPE_USER_PLANE etc.).
That way, we can have a uniform outlook on profiles instead of each profile
coming with a set of its own APIs and __rules__ on the state machine.
I think, for a framework to leverage communication mechanisms and other
aspects between profiles, it's important to have some synergy between profiles.
Yes, I agree that a bit more logic is required on the application side
to use TLV, but I think we can have a wrapper function taking request and
response structures.
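Such a wrapper could look roughly like this minimal sketch, assuming a
hypothetical packed header (struct dwa_tlv, dwa_tlv_pack and all field names
here are illustrative, not taken from the RFC):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical packed TLV header; field names are illustrative only. */
struct dwa_tlv {
	uint32_t tag;      /* profile tag */
	uint32_t stag;     /* sub-tag selecting the message within the profile */
	uint32_t len;      /* payload length in bytes */
	uint8_t payload[]; /* packed payload follows the header */
};

/* Wrapper hiding TLV packing from the application: serialize a request
 * structure into a caller-provided buffer and return the bytes written. */
static size_t dwa_tlv_pack(uint8_t *buf, uint32_t tag, uint32_t stag,
			   const void *req, uint32_t len)
{
	struct dwa_tlv *tlv = (struct dwa_tlv *)buf;

	tlv->tag = tag;
	tlv->stag = stag;
	tlv->len = len;
	if (len != 0)
		memcpy(tlv->payload, req, len);
	return sizeof(*tlv) + len;
}
```

An application-side helper like this keeps the request/response structures
as plain C structs while the wire format stays a packed TLV.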
>
> Such a C API could still be asynchronous, and still be a profile API
> (rather than a set of new DPDK device types).
>
>
> What I tried to ask during the meeting but where I didn't get an answer
> (or at least one that I could understand) was how the profiles was to be
> specified and/or documented. Maybe the above is what you had in mind
> already.
Yes. Documentation is easy, please check the RFC header file for Doxygen
meta to express all the attributes of a TLV.
+enum rte_dwa_port_host_ethernet {
+ /**
+ * Attribute | Value
+ * ----------|--------
+ * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
+ * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
+ * Direction | H2D
+ * Type | TYPE_ATTACHED
+ * Payload | NA
+ * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
+ *
+ * Request DWA host ethernet port information.
+ */
+ RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
+ /**
+ * Attribute | Value
+ * ----------|---------
+ * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
+ * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
+ * Direction | H2D
+ * Type | TYPE_ATTACHED
+ * Payload | struct rte_dwa_port_host_ethernet_d2h_info
+ * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
+ *
+ * Response for DWA host ethernet port information.
+ */
+ RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,
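A small illustration of how a host application could exploit the "Pair TLV"
attribute documented in the tables above; the enum and helper names here are
assumptions mirroring the RFC naming, not the real definitions:

```c
/* Illustrative sub-tags mirroring the RFC naming; values are assumptions. */
enum dwa_stag {
	DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
	DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,
};

/* Map an H2D request sub-tag to its paired D2H response sub-tag, so the
 * application knows which response type to poll for after a request. */
static enum dwa_stag dwa_pair_stag(enum dwa_stag req)
{
	switch (req) {
	case DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO:
		return DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO;
	default:
		return req; /* no pair TLV defined for this sub-tag */
	}
}
```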
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [dpdk-techboard] [PATCH v2] vhost: mark vDPA driver API as internal
@ 2021-10-29 16:15 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-29 16:15 UTC (permalink / raw)
To: Maxime Coquelin
Cc: dev, techboard, chenbo.xia, xuemingl, xiao.w.wang, david.marchand
28/10/2021 16:15, Maxime Coquelin:
> This patch marks the vDPA driver APIs as internal and
> renames the corresponding header file to vdpa_driver.h.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>
> Hi Techboard,
>
> Please vote for an exception for this unannounced API
> breakage.
[...]
> lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} | 12 +++++++++---
Hiding more internal structs is a good breakage.
[...]
> --- a/lib/vhost/rte_vdpa_dev.h
> +++ b/lib/vhost/vdpa_driver.h
> +__rte_internal
> struct rte_vdpa_device *
> rte_vdpa_register_device(struct rte_device *rte_dev,
> struct rte_vdpa_dev_ops *ops);
[...]
> +__rte_internal
> int
> rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
[...]
> +__rte_internal
> int
> rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
[...]
> +__rte_internal
> int
> rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
[...]
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> - rte_vdpa_register_device;
> - rte_vdpa_relay_vring_used;
> - rte_vdpa_unregister_device;
> - rte_vhost_host_notifier_ctrl;
OK to remove these functions from the ABI
and mark them internal.
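(For reference, a sketch of what the version.map side of such a change
typically amounts to in DPDK, with surrounding symbols elided: the symbols
move out of the versioned public sections into an INTERNAL section.)

```
INTERNAL {
	global:

	rte_vdpa_register_device;
	rte_vdpa_relay_vring_used;
	rte_vdpa_unregister_device;
	rte_vhost_host_notifier_ctrl;
};
```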
I suppose this breakage should not hurt too much,
as I don't see the need for out-of-tree vDPA drivers.
Of course it is always better to announce such change,
but it would be a pity to wait one more year for hiding this.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-29 15:51 2% ` Jerin Jacob
@ 2021-10-31 9:18 4% ` Mattias Rönnblom
2021-10-31 14:01 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-10-31 9:18 UTC (permalink / raw)
To: Jerin Jacob
Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
On 2021-10-29 17:51, Jerin Jacob wrote:
> On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> On 2021-10-25 11:03, Jerin Jacob wrote:
>>> On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>> On 2021-10-19 20:14, jerinj@marvell.com wrote:
>>>>> From: Jerin Jacob <jerinj@marvell.com>
>>>>>
>>>>>
>>>>> Dataplane Workload Accelerator library
>>>>> ======================================
>>>>>
>>>>> Definition of Dataplane Workload Accelerator
>>>>> --------------------------------------------
>>>>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
>>>>> Network controllers and programmable data acceleration engines for
>>>>> packet processing, cryptography, regex engines, baseband processing, etc.
>>>>> This allows DWA to offload compute/packet processing/baseband/
>>>>> cryptography-related workload from the host CPU to save the cost and power.
>>>>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
>>>>>
>>>>> Unlike other devices in DPDK, the DWA device is not fixed-function
>>>>> due to the fact that it has CPUs and programmable HW accelerators.
>>>> There are already several instances of DPDK devices with pure-software
>>>> implementation. In this regard, a DPU/SmartNIC represents nothing new.
>>>> What's new, it seems to me, is a much-increased need to
>>>> configure/arrange the processing in complex manners, to avoid bouncing
>>>> everything to the host CPU.
>>> Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
>>> have user plane traffic from/to host. For example, offloading ORAN split 7.2
>>> baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
>>>
>>>> Something like P4 or rte_flow-based hooks or
>>>> some other kind of extension. The eventdev adapters solve the same
>>>> problem (where on some systems packets go through the host CPU on their
>>>> way to the event device, and others do not) - although on a *much*
>>>> smaller scale.
>>> Yes. Eventdev Adapters only for event device plumbing.
>>>
>>>
>>>> "Not-fixed function" seems to call for more hot plug support in the
>>>> device APIs. Such functionality could then be reused by anything that
>>>> can be reconfigured dynamically (FPGAs, firmware-programmed
>>>> accelerators, etc.),
>>> Yes.
>>>
>>>> but which may not be able to serve as a RPC
>>>> endpoint, like a SmartNIC.
>>> It can. That's the reason for choosing TLVs. So that
>>> any higher level language can use TLVs like https://github.com/ustropo/uttlv
>>> to communicate with the accelerator. TLVs follow the request and
>>> response scheme like RPC. So it can wrap it under the application if needed.
>>>
>>>> DWA could be some kind of DPDK-internal framework for managing certain
>>>> type of DPUs, but should it be exposed to the user application?
>>> Could you clarify a bit more.
>>> The offload is represented as a set of TLVs in generic fashion. There
>>> is no DPU specific bit in offload representation. See
>>> rte_dwa_profiile_l3fwd.h header file.
>>
>> It seems a bit cumbersome to work with TLVs on the user application
>> side. Would it be an alternative to have the profile API as a set of C
>> APIs instead of TLV-based messaging interface? The underlying
>> implementation could still - in many or all cases - be TLVs sent over
>> some appropriate transport.
> The reason to pick TLVs is as follows
>
> 1) Very easy to enable ABI compatibility. (Learned from rte_flow)
Do you include the TLV-defined profile interface in "ABI"? Or do you,
by ABI, only mean the C ABI to send/receive TLVs? To me, the former
makes the most sense, since changing the profile will break binary
compatibility with then-existing applications.
> 2) If it needs to be transported over a network etc., it needs to be
> packed; that is easy for an implementation to do with TLV, and it also
> gives better performance in such cases by avoiding reformatting or
> possibly avoiding memcpy etc.
My question was not "why TLVs", but the more specific "why are TLVs
exposed to the user application?" I find it likely that user applications
are going to wrap the TLV serialization and de-serialization into their
own functions.
> 3) It is easy to plug in another high-level programming language with
> just one API.
Makes sense. One note though: the transport is just one API, but then
each profile makes up an API as well, although it's not C, but TLV-based.
> 4) Easy to decouple DWA core library functionalities from profile.
> 5) Easy to enable asynchronous scheme using request and response TLVs.
> 6) Most importantly, We could introduce type notion with TLV
> (connected with the type of message See TYPE_ATTACHED, TYPE_STOPPED,
> TYPE_USER_PLANE etc ),
> That way, we can have a uniform outlook of profiles instead of each profile
> coming with a setup of its own APIs and __rules__ on the state machine.
> I think, for a framework to leverage communication mechanisms and other
> aspects between profiles, it's important to have some synergy between profiles.
>
>
> Yes. I agree that a bit more logic is required on the application side
> to use TLV,
> But I think we can have a wrapper function getting req and response structures.
Do you think ethdev, eventdev, cryptodev and the other DPDK APIs would have
been better off as TLV-based messaging interfaces as well? From a user
point of view, I'm not sure I see what's so special about talking to a
SmartNIC compared to functions implemented in a GPU, an FPGA, a
fixed-function ASIC, a large array of garden gnomes or some other manner.
More functionality and more need for asynchronicity (if that's a word)
maybe.
>> Such a C API could still be asynchronous, and still be a profile API
>> (rather than a set of new DPDK device types).
>>
>>
>> What I tried to ask during the meeting but where I didn't get an answer
>> (or at least one that I could understand) was how the profiles was to be
>> specified and/or documented. Maybe the above is what you had in mind
>> already.
> Yes. Documentation is easy, please check the RFC header file for Doxygen
> meta to express all the attributes of a TLV.
>
>
> +enum rte_dwa_port_host_ethernet {
> + /**
> + * Attribute | Value
> + * ----------|--------
> + * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
> + * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> + * Direction | H2D
> + * Type | TYPE_ATTACHED
> + * Payload | NA
> + * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> + *
> + * Request DWA host ethernet port information.
> + */
> + RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
> + /**
> + * Attribute | Value
> + * ----------|---------
> + * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
> + * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> + * Direction | H2D
> + * Type | TYPE_ATTACHED
> + * Payload | struct rte_dwa_port_host_ethernet_d2h_info
> + * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> + *
> + * Response for DWA host ethernet port information.
> + */
> + RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,
Thanks for the pointer.
It would make sense to have a machine-readable schema, so you can
generate the (in my view) inevitable wrapper code. Much like what gRPC
is to protobuf, or Sun RPC to XDR.
Why not use protobuf and its IDL to specify the interface?
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-31 9:18 4% ` Mattias Rönnblom
@ 2021-10-31 14:01 4% ` Jerin Jacob
2021-10-31 19:34 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-31 14:01 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
On Sun, Oct 31, 2021 at 2:48 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2021-10-29 17:51, Jerin Jacob wrote:
> > On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >> On 2021-10-25 11:03, Jerin Jacob wrote:
> >>> On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>> On 2021-10-19 20:14, jerinj@marvell.com wrote:
> >>>>> From: Jerin Jacob <jerinj@marvell.com>
> >>>>>
> >>>>>
> >>>>> Dataplane Workload Accelerator library
> >>>>> ======================================
> >>>>>
> >>>>> Definition of Dataplane Workload Accelerator
> >>>>> --------------------------------------------
> >>>>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
> >>>>> Network controllers and programmable data acceleration engines for
> >>>>> packet processing, cryptography, regex engines, baseband processing, etc.
> >>>>> This allows DWA to offload compute/packet processing/baseband/
> >>>>> cryptography-related workload from the host CPU to save the cost and power.
> >>>>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
> >>>>>
> >>>>> Unlike other devices in DPDK, the DWA device is not fixed-function
> >>>>> due to the fact that it has CPUs and programmable HW accelerators.
> >>>> There are already several instances of DPDK devices with pure-software
> >>>> implementation. In this regard, a DPU/SmartNIC represents nothing new.
> >>>> What's new, it seems to me, is a much-increased need to
> >>>> configure/arrange the processing in complex manners, to avoid bouncing
> >>>> everything to the host CPU.
> >>> Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
> >>> have user plane traffic from/to host. For example, offloading ORAN split 7.2
> >>> baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
> >>>
> >>>> Something like P4 or rte_flow-based hooks or
> >>>> some other kind of extension. The eventdev adapters solve the same
> >>>> problem (where on some systems packets go through the host CPU on their
> >>>> way to the event device, and others do not) - although on a *much*
> >>>> smaller scale.
> >>> Yes. Eventdev Adapters only for event device plumbing.
> >>>
> >>>
> >>>> "Not-fixed function" seems to call for more hot plug support in the
> >>>> device APIs. Such functionality could then be reused by anything that
> >>>> can be reconfigured dynamically (FPGAs, firmware-programmed
> >>>> accelerators, etc.),
> >>> Yes.
> >>>
> >>>> but which may not be able to serve as a RPC
> >>>> endpoint, like a SmartNIC.
> >>> It can. That's the reason for choosing TLVs. So that
> >>> any higher level language can use TLVs like https://github.com/ustropo/uttlv
> >>> to communicate with the accelerator. TLVs follow the request and
> >>> response scheme like RPC. So it can wrap it under the application if needed.
> >>>
> >>>> DWA could be some kind of DPDK-internal framework for managing certain
> >>>> type of DPUs, but should it be exposed to the user application?
> >>> Could you clarify a bit more.
> >>> The offload is represented as a set of TLVs in generic fashion. There
> >>> is no DPU specific bit in offload representation. See
> >>> rte_dwa_profiile_l3fwd.h header file.
> >>
> >> It seems a bit cumbersome to work with TLVs on the user application
> >> side. Would it be an alternative to have the profile API as a set of C
> >> APIs instead of TLV-based messaging interface? The underlying
> >> implementation could still - in many or all cases - be TLVs sent over
> >> some appropriate transport.
> > The reason to pick TLVs is as follows
> >
> > 1) Very easy to enable ABI compatibility. (Learned from rte_flow)
>
>
> Do you include the TLV-defined profile interface in "ABI"? Or do you,
> by ABI, only mean the C ABI to send/receive TLVs? To me, the former
> makes the most sense, since changing the profile will break binary
> compatibility with then-existing applications.
The TLV payload will be part of the ABI, just like rte_flow.
If there is ABI breakage on any TLV, we can add a new Tag and its associated
payload to enable backward compatibility, i.e. the old TLV will work
without any change.
>
>
> > 2) If it needs to be transported over a network etc., it needs to be
> > packed; that is easy for an implementation to do with TLV, and it also
> > gives better performance in such cases by avoiding reformatting or
> > possibly avoiding memcpy etc.
>
> My question was not "why TLVs", but the more specific "why are TLVs
> exposed to the user application?" I find it likely that user applications
> are going to wrap the TLV serialization and de-serialization into their
> own functions.
We can stack up the TLVs, unlike traditional function calls.
That is really needed if the device supports N profiles, so that multiple TLVs
can be submitted in a single shot in the fastpath.
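A minimal sketch of such stacking, assuming a simple tag/length header per
message (the layout is illustrative, not the RFC's): TLVs sit back to back in
one buffer, and a consumer walks them in a single pass, stepping over any tags
it does not know.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed minimal header for each stacked message. */
struct tlv_hdr {
	uint32_t tag;
	uint32_t len; /* payload bytes following the header */
};

/* Walk a buffer of back-to-back TLVs and count well-formed entries;
 * unknown tags are simply stepped over, not rejected, which is also how
 * newly added tags can keep old consumers working. */
static int tlv_walk(const uint8_t *buf, size_t size)
{
	size_t off = 0;
	int n = 0;

	while (off + sizeof(struct tlv_hdr) <= size) {
		const struct tlv_hdr *h = (const struct tlv_hdr *)(buf + off);

		if (off + sizeof(*h) + h->len > size)
			break; /* truncated entry */
		off += sizeof(*h) + h->len;
		n++;
	}
	return n;
}
```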
>
>
> > 3) It is easy to plug in another high-level programming language with
> > just one API.
>
>
> Makes sense. One note though: the transport is just one API, but then
> each profile makes up an API as well, although it's not C, but TLV-based.
Yes,
>
>
> > 4) Easy to decouple DWA core library functionalities from profile.
> > 5) Easy to enable asynchronous scheme using request and response TLVs.
> > 6) Most importantly, We could introduce type notion with TLV
> > (connected with the type of message See TYPE_ATTACHED, TYPE_STOPPED,
> > TYPE_USER_PLANE etc ),
> > That way, we can have a uniform outlook of profiles instead of each profile
> > coming with a setup of its own APIs and __rules__ on the state machine.
> > I think, for a framework to leverage communication mechanisms and other
> > aspects between profiles, it's important to have some synergy between profiles.
> >
> >
> > Yes. I agree that a bit more logic is required on the application side
> > to use TLV,
> > But I think we can have a wrapper function getting req and response structures.
>
>
> Do you think ethdev, eventdev, cryptodev and the other DPDK APIs would have
> been better off as TLV-based messaging interfaces as well? From a user
> point of view, I'm not sure I see what's so special about talking to a
> SmartNIC compared to functions implemented in a GPU, an FPGA, a
> fixed-function ASIC, a large array of garden gnomes or some other manner.
> More functionality and more need for asynchronicity (if that's a word)
> maybe.
No. I am trying to avoid creating 1000s of APIs and their driver hooks
for all profiles, and to enable symmetry between all the profiles by
attaching state and type attributes to the TLVs so that we get a unified view.
Nothing specific to SmartNIC/GPU/FPGA.
Also, TLVs are very common in interoperable solutions like
https://scf.io/en/documents/222_5G_FAPI_PHY_API_Specification.php
>
> >> Such a C API could still be asynchronous, and still be a profile API
> >> (rather than a set of new DPDK device types).
> >>
> >>
> >> What I tried to ask during the meeting but where I didn't get an answer
> >> (or at least one that I could understand) was how the profiles was to be
> >> specified and/or documented. Maybe the above is what you had in mind
> >> already.
> > Yes. Documentation is easy, please check the RFC header file for Doxygen
> > meta to express all the attributes of a TLV.
> >
> >
> > +enum rte_dwa_port_host_ethernet {
> > + /**
> > + * Attribute | Value
> > + * ----------|--------
> > + * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
> > + * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> > + * Direction | H2D
> > + * Type | TYPE_ATTACHED
> > + * Payload | NA
> > + * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> > + *
> > + * Request DWA host ethernet port information.
> > + */
> > + RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
> > + /**
> > + * Attribute | Value
> > + * ----------|---------
> > + * Tag | RTE_DWA_TAG_PORT_HOST_ETHERNET
> > + * Stag | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> > + * Direction | H2D
> > + * Type | TYPE_ATTACHED
> > + * Payload | struct rte_dwa_port_host_ethernet_d2h_info
> > + * Pair TLV | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> > + *
> > + * Response for DWA host ethernet port information.
> > + */
> > + RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,
>
>
> Thanks for the pointer.
>
>
> It would make sense to have a machine-readable schema, so you can
> generate the (in my view) inevitable wrapper code. Much like what gRPC
> is to protobuf, or Sun RPC to XDR.
I thought of doing that, but I think it may not be good due to:
1) Additional library dependencies.
2) Performance overhead of such solutions.
Not all transports are supported in all the libraries,
and we want to allow drivers to enable any sort of transport.
3) Keep it simple.
4) Better asynchronous support.
5) If someone needs a gRPC kind of thing, it can be wrapped over TLV.
Since rte_flow already has the TLV concept, it may not be new to DPDK.
I really liked rte_flow's enablement of ABI compatibility and its ease of
adding new stuff, and tried to follow a similar approach which is proven in DPDK.
I.e., new profile creation will be very easy; it will be a matter of identifying
the TLVs and their type and payload, rather than everyone coming with
new APIs in every profile.
>
>
> Why not use protobuf and its IDL to specify the interface?
>
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-31 14:01 4% ` Jerin Jacob
@ 2021-10-31 19:34 0% ` Thomas Monjalon
2021-10-31 21:13 2% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-31 19:34 UTC (permalink / raw)
To: Mattias Rönnblom, Jerin Jacob
Cc: jerinj, dev, ferruh.yigit, ajit.khaparde, aboyer,
andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
31/10/2021 15:01, Jerin Jacob:
> Since rte_flow already has the TLV concept it may not be new to DPDK.
Where is there TLV in rte_flow?
> I really liked rte_flow's enablement of ABI compatibility and its ease of
> adding new stuff, and tried to follow a similar approach which is proven in DPDK.
> I.e., new profile creation will be very easy; it will be a matter of identifying
> the TLVs and their type and payload, rather than everyone coming with
> new APIs in every profile.
>
> > Why not use protobuf and its IDL to specify the interface?
Yes I think it is important to discuss alternatives,
and at least get justifications of why TLV is chosen among others.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-31 19:34 0% ` Thomas Monjalon
@ 2021-10-31 21:13 2% ` Jerin Jacob
2021-10-31 21:55 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-31 21:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 31/10/2021 15:01, Jerin Jacob:
> > Since rte_flow already has the TLV concept it may not be new to DPDK.
>
> Where is there TLV in rte_flow?
struct rte_flow_item {
enum rte_flow_item_type type; /**< Item type. */
const void *spec; /**< Pointer to item specification structure. */
Type is the tag here and spec is the value here. Length is the
size of the specification structure.
The rte_flow spec does not support/need a zero-length variable array at the
end of the spec structure; that is the reason for not embedding an explicit
length value, as it can be derived from sizeof(specification structure).
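The analogy can be sketched with simplified stand-ins for the rte_flow types
(illustrative reductions, not the real DPDK definitions): the item type plays
the role of the tag, the spec pointer the value, and the length is implied by
the sizeof() of the per-type spec structure.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow_item_type and the per-type specs. */
enum flow_item_type { ITEM_TYPE_END, ITEM_TYPE_ETH, ITEM_TYPE_IPV4 };

struct item_eth  { uint8_t dst[6]; uint8_t src[6]; };
struct item_ipv4 { uint32_t src_addr; uint32_t dst_addr; };

struct flow_item {
	enum flow_item_type type; /* the "tag" */
	const void *spec;         /* the "value" */
};

/* The "length" is implied: derived per type from the spec structure size. */
static size_t flow_item_spec_len(enum flow_item_type type)
{
	switch (type) {
	case ITEM_TYPE_ETH:
		return sizeof(struct item_eth);
	case ITEM_TYPE_IPV4:
		return sizeof(struct item_ipv4);
	default:
		return 0;
	}
}
```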
>
> > > I really liked rte_flow's enablement of ABI compatibility and its ease of
> > > adding new stuff, and tried to follow a similar approach which is proven in DPDK.
> > > I.e., new profile creation will be very easy; it will be a matter of identifying
> > > the TLVs and their type and payload, rather than everyone coming with
> > > new APIs in every profile.
> >
> > > Why not use protobuf and its IDL to specify the interface?
>
> Yes I think it is important to discuss alternatives,
> and at least get justifications of why TLV is chosen among others.
Yes. The current list is:
1) Very easy to enable ABI compatibility.
2) If it needs to be transported over a network etc., it needs to be
packed; that is easy for an implementation to do with TLV, and it also
gives better performance in such cases by avoiding reformatting or
possibly avoiding memcpy etc.
3) It is easy to plug in another high-level programming language with
just one API.
4) Easy to decouple DWA core library functionality from the profile.
5) Easy to enable an asynchronous scheme using request and response TLVs.
6) Most importantly, we could introduce a type notion with TLV
(connected with the type of message, see TYPE_ATTACHED, TYPE_STOPPED,
TYPE_USER_PLANE etc.).
That way, we can have a uniform outlook on profiles instead of each profile
coming with a set of its own APIs and __rules__ on the state machine.
I think, for a framework to leverage communication mechanisms and other
aspects between profiles, it's important to have some synergy between profiles.
7) No additional library dependencies like gRPC or protobuf.
8) Provide a driver to implement the optimized means of supporting different
transports such as Ethernet, shared memory, PCIe DMA style HW etc.
9) Avoid creating endless APIs and their associated driver function
calls for each profile's APIs.
>
>
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-31 21:13 2% ` Jerin Jacob
@ 2021-10-31 21:55 0% ` Thomas Monjalon
2021-10-31 22:19 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-31 21:55 UTC (permalink / raw)
To: Jerin Jacob
Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
31/10/2021 22:13, Jerin Jacob:
> On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 31/10/2021 15:01, Jerin Jacob:
> > > Since rte_flow already has the TLV concept it may not be new to DPDK.
> >
> > Where is there TLV in rte_flow?
>
> struct rte_flow_item {
> enum rte_flow_item_type type; /**< Item type. */
> const void *spec; /**< Pointer to item specification structure. */
>
> Type is the tag here and the spec is the value here. Length is the
> size of the specification structure.
> rte_flows spec does not support/need zero length variable at the end
> of spec structure,
> that reason for not embedding explicit length value as it is can be
> derived from sizeof(specification structure).
Ah OK I see what you mean.
But rte_flow_item is quite limited,
it is not the kind of TLV with multiple levels of nesting.
Do you need nesting of objects in DWA?
> > > I really liked rte_flow enablement of ABI compatibility and its ease of adding
> > > new stuff. Try to follow similar stuff which is proven in DPDK.
> > > Ie. New profile creation will be very easy, it will be a matter of identifying
> > > the TLVs and their type and payload, rather than everyone comes with
> > > new APIs in every profile.
> > >
> > > > Why not use protobuf and its IDL to specify the interface?
> >
> > Yes I think it is important to discuss alternatives,
> > and at least get justifications of why TLV is chosen among others.
>
> Yes. Current list is
>
> 1) Very easy to enable ABI compatibility.
> 2) If it needs to be transported over network etc it needs to be
> packed so that way it is easy for implementation to do that
> with TLV also gives better performance in such
> cases by avoiding reformatting or possibly avoiding memcpy etc.
> 3) It is easy to plug in another high-level programming language, as
> there is just one API.
> 4) Easy to decouple DWA core library functionalities from profile.
> 5) Easy to enable asynchronous scheme using request and response TLVs.
> 6) Most importantly, We could introduce type notion with TLV
> (connected with the type of message See TYPE_ATTACHED, TYPE_STOPPED,
> TYPE_USER_PLANE etc ),
> That way, we can have a uniform outlook of profiles instead of each profile
> coming with a setup of its own APIs and __rules__ on the state machine.
> I think, for a framework to leverage communication mechanisms and other
> aspects between profiles, it's important to have some synergy between profiles.
> 7) No Additional library dependencies like gRPC, protobuf
> 8) Provide driver to implement the optimized means of supporting different
> transport such as Ethernet, Shared memory, PCIe DMA style HW etc.
> 9) Avoid creating endless APIs and their associated driver function
> calls for each
> profile APIs.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
2021-10-31 21:55 0% ` Thomas Monjalon
@ 2021-10-31 22:19 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-10-31 22:19 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
Elana Agostini
On Mon, Nov 1, 2021 at 3:25 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 31/10/2021 22:13, Jerin Jacob:
> > On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 31/10/2021 15:01, Jerin Jacob:
> > > > Since rte_flow already has the TLV concept it may not be new to DPDK.
> > >
> > > Where is there TLV in rte_flow?
> >
> > struct rte_flow_item {
> > enum rte_flow_item_type type; /**< Item type. */
> > const void *spec; /**< Pointer to item specification structure. */
> >
> > Type is the tag here and the spec is the value here. Length is the
> > size of the specification structure.
> > rte_flow specs do not support/need a zero-length variable array at the
> > end of the spec structure, which is the reason for not embedding an
> > explicit length value, as it can be derived from
> > sizeof(specification structure).
>
> Ah OK I see what you mean.
> But rte_flow_item is quite limited,
> it is not the kind of TLV with multiple levels of nesting.
> Do you need nesting of objects in DWA?
No. Currently, the Ethernet-based host port has the following
prototype [1], and it takes an array of TLVs (not in contiguous memory).
For simplicity, we could remove the length value from rte_dwa_tlv and
keep it like rte_flow, letting the payload contain the length of the
message if the message has a variable length.
See rte_dwa_profile_l3fwd_d2h_exception_pkts::nb_pkts below.
[1]
+/**
+ * Receive a burst of TLVs of type `TYPE_USER_PLANE` from the Rx queue
+ * designated by its *queue_id* of DWA object *obj*.
+ *
+ * @param obj
+ * DWA object.
+ * @param queue_id
+ * The identifier of Rx queue id. The queue id should in the range of
+ * [0 to rte_dwa_port_host_ethernet_config::nb_rx_queues].
+ * @param[out] tlvs
+ * Points to an array of *nb_tlvs* tlvs of type *rte_dwa_tlv* structure
+ * to be received.
+ * @param nb_tlvs
+ * The maximum number of TLVs to be received.
+ *
+ * @return
+ * The number of TLVs actually received on the Rx queue. The return
+ * value can be less than the value of the *nb_tlvs* parameter when the
+ * Rx queue is not full.
+ */
+uint16_t rte_dwa_port_host_ethernet_rx(rte_dwa_obj_t obj, uint16_t queue_id,
+ struct rte_dwa_tlv **tlvs, uint16_t nb_tlvs);
[2]
example TLV for TYPE_USER_PLANE traffic.
+ /**
+ * Attribute | Value
+ * ----------|--------
+ * Tag | RTE_DWA_TAG_PROFILE_L3FWD
+ * Stag | RTE_DWA_STAG_PROFILE_L3FWD_D2H_EXCEPTION_PACKETS
+ * Direction | D2H
+ * Type | TYPE_USER_PLANE
+ * Payload | struct rte_dwa_profile_l3fwd_d2h_exception_pkts
+ * Pair TLV | NA
+ *
+ * Response from DWA of exception packets.
+ */
+/**
+ * Payload of RTE_DWA_STAG_PROFILE_L3FWD_D2H_EXCEPTION_PACKETS message.
+ */
+struct rte_dwa_profile_l3fwd_d2h_exception_pkts {
+ uint16_t nb_pkts;
+ /**< Number of packets in the variable size array.*/
+ uint16_t rsvd16;
+ /**< Reserved field to make pkts[0] to be 64bit aligned.*/
+ uint32_t rsvd32;
+ /**< Reserved field to make pkts[0] to be 64bit aligned.*/
+ struct rte_mbuf *pkts[0];
+ /**< Array of rte_mbufs of size nb_pkts. */
+} __rte_packed;
>
> > > > I really liked rte_flow enablement of ABI compatibility and its ease of adding
> > > > new stuff. Try to follow similar stuff which is proven in DPDK.
> > > > Ie. New profile creation will be very easy, it will be a matter of identifying
> > > > the TLVs and their type and payload, rather than everyone comes with
> > > > new APIs in every profile.
> > > >
> > > > > Why not use protobuf and its IDL to specify the interface?
> > >
> > > Yes I think it is important to discuss alternatives,
> > > and at least get justifications of why TLV is chosen among others.
> >
> > Yes. Current list is
> >
> > 1) Very easy to enable ABI compatibility.
> > 2) If it needs to be transported over network etc it needs to be
> > packed so that way it is easy for implementation to do that
> > with TLV also gives better performance in such
> > cases by avoiding reformatting or possibly avoiding memcpy etc.
> > 3) It is easy to plug in another high-level programming language, as
> > there is just one API.
> > 4) Easy to decouple DWA core library functionalities from profile.
> > 5) Easy to enable asynchronous scheme using request and response TLVs.
> > 6) Most importantly, We could introduce type notion with TLV
> > (connected with the type of message See TYPE_ATTACHED, TYPE_STOPPED,
> > TYPE_USER_PLANE etc ),
> > That way, we can have a uniform outlook of profiles instead of each profile
> > coming with a setup of its own APIs and __rules__ on the state machine.
> > I think, for a framework to leverage communication mechanisms and other
> > aspects between profiles, it's important to have some synergy between profiles.
> > 7) No Additional library dependencies like gRPC, protobuf
> > 8) Provide driver to implement the optimized means of supporting different
> > transport such as Ethernet, Shared memory, PCIe DMA style HW etc.
> > 9) Avoid creating endless APIs and their associated driver function
> > calls for each
> > profile APIs.
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/3] eventdev: allow for event devices requiring maintenance
@ 2021-11-01 9:26 3% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2021-11-01 9:26 UTC (permalink / raw)
To: Jerin Jacob, Van Haaren, Harry, McDaniel, Timothy,
Pavan Nikhilesh, Hemant Agrawal, Liang Ma
Cc: Richardson, Bruce, Jerin Jacob, Gujjar, Abhinandan S,
Erik Gabriel Carrillo, Jayatheerthan, Jay, dpdk-dev
On 2021-10-29 17:17, Jerin Jacob wrote:
> On Fri, Oct 29, 2021 at 8:33 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> On 2021-10-29 16:38, Jerin Jacob wrote:
>>> On Tue, Oct 26, 2021 at 11:02 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>> Extend Eventdev API to allow for event devices which require various
>>>> forms of internal processing to happen, even when events are not
>>>> enqueued to or dequeued from a port.
>>>>
>>>> PATCH v1:
>>>> - Adapt to the move of fastpath function pointers out of
>>>> rte_eventdev struct
>>>> - Attempt to clarify how often the application is expected to
>>>> call rte_event_maintain()
>>>> - Add trace point
>>>> RFC v2:
>>>> - Change rte_event_maintain() return type to be consistent
>>>> with the documentation.
>>>> - Remove unused typedef from eventdev_pmd.h.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> Tested-by: Richard Eklycke <richard.eklycke@ericsson.com>
>>>> Tested-by: Liron Himi <lironh@marvell.com>
>>>> ---
>>>>
>>>> +/**
>>>> + * Maintain an event device.
>>>> + *
>>>> + * This function is only relevant for event devices which has the
>>>> + * RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag set. Such devices require the
>>>> + * application to call rte_event_maintain() on a port during periods
>>>> + * which it is neither enqueuing nor dequeuing events from that
>>>> + * port.
>>> # We need to add "by the same core". Right? As other core such as
>>> service core can not call rte_event_maintain()
>>
>> Do you mean by the same lcore thread that "owns" (dequeues and enqueues
>> to) the port? Yes. I thought that was implicit, since eventdev port are
>> not MT safe. I'll try to figure out some wording that makes that more clear.
> OK.
>
>>
>>> # Also, Incase of Adapters enqueue() happens, right? If so, either
>>> above text is not correct.
>>> # @Erik Gabriel Carrillo @Jayatheerthan, Jay @Gujjar, Abhinandan S
>>> Please review 3/3 patch on adapter change.
>>> Let me know you folks are OK with change or not or need more time to analyze.
>>>
>>> If it need only for the adapter subsystem then can we make it an
>>> internal API between DSW and adapters?
>>
>> No, it's needed for any producer-only eventdev ports, including any such
>> ports used by the application.
>
> In that case, the code path in testeventdev, eventdev_pipeline, etc needs
> to be updated. I am worried about the performance impact for the drivers that
> don't have such limitations.
>
> Why not have an additional config option in port_config which says
> it is a producer-only port by an application and takes care of the driver.
>
> In the current adapters code, you are calling maintain() when enqueue
> returns zero.
> In such a case, if the port is configured as a producer, then
> internally it can call maintain.
>
> Thoughts from other eventdev maintainers?
> Cc+ @Van Haaren, Harry @Richardson, Bruce @Gujjar, Abhinandan S
> @Jayatheerthan, Jay @Erik Gabriel Carrillo @McDaniel, Timothy @Pavan
> Nikhilesh @Hemant Agrawal @Liang Ma
>
One more thing to consider: should we add a "int op" parameter to
rte_event_maintain()? It would also solve hack #2 in DSW eventdev API
integration: forcing an output buffer flush. This is today done with a
zero-sized rte_event_enqueue() call.
You could have something like:
#define RTE_EVENT_DEV_MAINT_FLUSH (1)
int
rte_event_maintain(int op);
It would also allow future extensions of "maintain", without ABI breakage.
Explicit flush is rare in real applications, in my experience, but
useful for test cases. I suspect for DSW to work with the DPDK eventdev
test suite, flushing buffered events (either zero-sized enqueue,
repeated rte_event_maintain() calls, or a single of the
rte_event_maintain(RTE_EVENT_DEV_MAINT_FLUSH) call [assuming the above
API]) is required in the test code.
>>
>> Should rte_event_maintain() be marked experimental? I don't know how
>> that works for inline functions.
>>
>>
>>> + rte_event_maintain() is a low-overhead function and should be
>>>> + * called at a high rate (e.g., in the applications poll loop).
>>>> + *
>>>> + * No port may be left unmaintained.
>>>> + *
>>>> + * rte_event_maintain() may be called on event devices which haven't
>>>> + * set RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag, in which case it is a
>>>> + * no-operation.
>>>> + *
>>>> + * @param dev_id
>>>> + * The identifier of the device.
>>>> + * @param port_id
>>>> + * The identifier of the event port.
>>>> + * @return
>>>> + * - 0 on success.
>>>> + * - -EINVAL if *dev_id* or *port_id* is invalid
>>>> + *
>>>> + * @see RTE_EVENT_DEV_CAP_REQUIRES_MAINT
>>>> + */
>>>> +static inline int
>>>> +rte_event_maintain(uint8_t dev_id, uint8_t port_id)
>>>> +{
>>>> + const struct rte_event_fp_ops *fp_ops;
>>>> + void *port;
>>>> +
>>>> + fp_ops = &rte_event_fp_ops[dev_id];
>>>> + port = fp_ops->data[port_id];
>>>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>>>> + if (dev_id >= RTE_EVENT_MAX_DEVS ||
>>>> + port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) {
>>>> + rte_errno = EINVAL;
>>>> + return 0;
>>>> + }
>>>> +
>>>> + if (port == NULL) {
>>>> + rte_errno = EINVAL;
>>>> + return 0;
>>>> + }
>>>> +#endif
>>>> + rte_eventdev_trace_maintain(dev_id, port_id);
>>>> +
>>>> + if (fp_ops->maintain != NULL)
>>>> + fp_ops->maintain(port);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> #ifdef __cplusplus
>>>> }
>>>> #endif
>>>> diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
>>>> index 61d5ebdc44..61fa65cab3 100644
>>>> --- a/lib/eventdev/rte_eventdev_core.h
>>>> +++ b/lib/eventdev/rte_eventdev_core.h
>>>> @@ -29,6 +29,9 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
>>>> uint64_t timeout_ticks);
>>>> /**< @internal Dequeue burst of events from port of a device */
>>>>
>>>> +typedef void (*event_maintain_t)(void *port);
>>>> +/**< @internal Maintains a port */
>>>> +
>>>> typedef uint16_t (*event_tx_adapter_enqueue_t)(void *port,
>>>> struct rte_event ev[],
>>>> uint16_t nb_events);
>>>> @@ -54,6 +57,8 @@ struct rte_event_fp_ops {
>>>> /**< PMD dequeue function. */
>>>> event_dequeue_burst_t dequeue_burst;
>>>> /**< PMD dequeue burst function. */
>>>> + event_maintain_t maintain;
>>>> + /**< PMD port maintenance function. */
>>>> event_tx_adapter_enqueue_t txa_enqueue;
>>>> /**< PMD Tx adapter enqueue function. */
>>>> event_tx_adapter_enqueue_t txa_enqueue_same_dest;
>>>> diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
>>>> index 5639e0b83a..c5a79a14d8 100644
>>>> --- a/lib/eventdev/rte_eventdev_trace_fp.h
>>>> +++ b/lib/eventdev/rte_eventdev_trace_fp.h
>>>> @@ -38,6 +38,13 @@ RTE_TRACE_POINT_FP(
>>>> rte_trace_point_emit_ptr(enq_mode_cb);
>>>> )
>>>>
>>>> +RTE_TRACE_POINT_FP(
>>>> + rte_eventdev_trace_maintain,
>>>> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
>>>> + rte_trace_point_emit_u8(dev_id);
>>>> + rte_trace_point_emit_u8(port_id);
>>>> +)
>>>> +
>>>> RTE_TRACE_POINT_FP(
>>>> rte_eventdev_trace_eth_tx_adapter_enqueue,
>>>> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
>>>> --
>>>> 2.25.1
>>>>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
2021-10-27 14:31 0% ` Thomas Monjalon
@ 2021-10-29 16:01 0% ` Song, Keesang
0 siblings, 0 replies; 200+ results
From: Song, Keesang @ 2021-10-29 16:01 UTC (permalink / raw)
To: Thomas Monjalon, Aman Kumar, Ananyev, Konstantin, Van Haaren, Harry
Cc: mattias. ronnblom, dev, viacheslavo, Burakov, Anatoly,
jerinjacobk, Richardson, Bruce, honnappa.nagarahalli,
Ruifeng Wang, David Christensen, david.marchand, stephen
[AMD Official Use Only]
Hi Thomas,
There are some gaps among us, so I think we really need another quick meeting call to discuss. I will set up a call like the last time on Monday.
Please join in the call if possible.
Thanks,
Keesang
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Wednesday, October 27, 2021 7:31 AM
To: Aman Kumar <aman.kumar@vvdntech.in>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>
Cc: mattias. ronnblom <mattias.ronnblom@ericsson.com>; dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly <anatoly.burakov@intel.com>; Song, Keesang <Keesang.Song@amd.com>; jerinjacobk@gmail.com; Richardson, Bruce <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>; david.marchand@redhat.com; stephen@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
27/10/2021 16:10, Van Haaren, Harry:
> From: Aman Kumar <aman.kumar@vvdntech.in> On Wed, Oct 27, 2021 at 5:53
> PM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote
> >
> > Hi Mattias,
> >
> > > > 6) What is the use-case for this? When would a user *want* to
> > > > use this instead
> > > of rte_memcpy()?
> > > > If the data being loaded is relevant to datapath/packets,
> > > > presumably other
> > > packets might require the
> > > > loaded data, so temporal (normal) loads should be used to cache
> > > > the source
> > > data?
> > >
> > >
> > > I'm not sure if your first question is rhetorical or not, but a
> > > memcpy() in a NT variant is certainly useful. One use case for a
> > > memcpy() with temporal loads and non-temporal stores is if you
> > > need to archive packet payload for (distant, potential) future
> > > use, and want to avoid causing unnecessary LLC evictions while doing so.
> >
> > Yes I agree that there are certainly benefits in using cache-locality hints.
> > There is an open question around if the src or dst or both are non-temporal.
> >
> > In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> > 1) Loads are NT (so loaded data is not cached for future packets)
> > 2) Stores are T (so copied/dst data is now resident in L1/L2)
> >
> > In theory there might even be valid uses for this type of memcpy
> > where loaded data is not needed again soon and stored data is
> > referenced again soon, although I cannot think of any here while typing this mail..
> >
> > I think some use-case examples, and clear documentation on when/how
> > to choose between rte_memcpy() or any (potential future)
> > rte_memcpy_nt() variants is required to progress this patch.
> >
> > Assuming a strong use-case exists, and it can be clearly indicators
> > to users of DPDK APIs which
> > rte_memcpy() to use, we can look at technical details around enabling the implementation.
> >
>
> [Konstantin wrote]:
> +1 here.
> Function behaviour and restrictions (src parameter needs to be 16/32 B
> aligned, etc.), along with expected usage scenarios have to be documented properly.
> Again, as Harry pointed out, I don't see any AMD specific instructions
> in this function, so presumably such function can go into __AVX2__
> code block and no new defines will be required.
>
>
> [Aman wrote]:
> Agreed that APIs are generic, but we've kept it under an AMD flag for the simple reason that it is NOT tested on any other platform.
> A use-case on how to use this was planned earlier for mlx5 pmd but dropped in this version of patch as the data path of mlx5 is going to be refactored soon and may not be useful for future versions of mlx5 (>22.02).
> Ref link:
> https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this into a future version). The patch in the link basically enhances the mlx5 MPRQ implementation for our specific use-case, and with 128B packet size we achieve ~60% better perf. We understand that the use of this copy function should be documented, which we plan to do along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep it under the AMD flag for now, as suggested by Thomas?
I said I could merge if there is no objection.
I've overlooked that it's adding completely new functions in the API.
And the comments go in the direction of what I asked in previous version:
what is specific to AMD here?
Now seeing the valid objections, I agree it should be reworked.
We must provide API to applications which is generic, stable and well documented.
> [HvH wrote]:
> As an open-source community, any contributions should aim to improve the whole.
> In the past, numerous improvements have been merged to DPDK that improve performance.
> Sometimes these are architecture specific (x86/arm/ppc) sometimes the are ISA specific (SSE, AVX512, NEON).
>
> I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
> A quick "grep" through the "dpdk/lib" directory does not show any
> place where PMD or generic code has been explicitly optimized for a *specific platform*.
>
> Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
> But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
>
> Please take a step back from the code, and look at what this patch asks of DPDK:
> "Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".
>
> Other patches that enhance performance of DPDK ask this:
> "Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".
>
>
> === Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
> I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.
>
> Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
> But do so in a way that does not benefit only a specific platform - do
> so in a way that enhances all of DPDK, as other patches have done for the DPDK that this patch is built on.
>
> If you have concerns that the PMD maintainers will not accept the
> changes due to potential regressions on other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.
>
>
> === Regarding specifically the request for "can we still keep under AMD flag for now"?
> I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
> The value of EAL is to provide a common abstraction. This
> platform-specific flag breaks the abstraction, and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
> So, no, we should not introduce APIs based on any compile-time flag.
I agree
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
2021-10-28 7:10 0% ` Jiang, YuX
@ 2021-11-01 11:53 0% ` Jiang, YuX
0 siblings, 0 replies; 200+ results
From: Jiang, YuX @ 2021-11-01 11:53 UTC (permalink / raw)
To: Thomas Monjalon, dev (dev@dpdk.org)
Cc: Devlin, Michelle, Mcnamara, John, Yigit, Ferruh
> -----Original Message-----
> From: Jiang, YuX
> Sent: Thursday, October 28, 2021 3:11 PM
> To: Thomas Monjalon <thomas@monjalon.net>; dev (dev@dpdk.org)
> <dev@dpdk.org>
> Cc: Devlin, Michelle <michelle.devlin@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Subject: RE: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> > Sent: Tuesday, October 26, 2021 5:41 AM
> > To: announce@dpdk.org
> > Subject: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
> >
> > A new DPDK release candidate is ready for testing:
> > https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
> >
> > There are 1171 new patches in this snapshot, big as expected.
> >
> > Release notes:
> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html
> >
> > Highlights of 21.11-rc1:
> > * General
> > - more than 512 MSI-X interrupts
> > - hugetlbfs subdirectories
> > - mempool flag for non-IO usages
> > - device class for DMA accelerators
> > - DMA drivers for Intel DSA and IOAT
> > * Networking
> > - MTU handling rework
> > - get all MAC addresses of a port
> > - RSS based on L3/L4 checksum fields
> > - flow match on L2TPv2 and PPP
> > - flow flex parser for custom header
> > - control delivery of HW Rx metadata
> > - transfer flows API rework
> > - shared Rx queue
> > - Windows support of Intel e1000, ixgbe and iavf
> > - testpmd multi-process
> > - pcapng library and dumpcap tool
> > * API/ABI
> > - API namespace improvements (mempool, mbuf, ethdev)
> > - API internals hidden (intr, ethdev, security, cryptodev, eventdev,
> > cmdline)
> > - flags check for future ABI compatibility (memzone, mbuf, mempool)
> >
> > Please test and report issues on bugs.dpdk.org.
> > DPDK 21.11-rc2 is expected in two weeks or less.
> >
> > Thank you everyone
> >
> Update the test status for Intel part. Till now dpdk21.11-rc1 test execution
> rate is 50%. No critical issue is found.
> But one little high issue https://bugs.dpdk.org/show_bug.cgi?id=843 impacts
> cryptodev function and performance test.
> Bad commit id is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author:
> Harman Kalra <hkalra@marvell.com>. Please help to handle it.
> # Basic Intel(R) NIC testing
> * Build or compile:
> *Build: cover the build test combination with latest GCC/Clang/ICC
> version and the popular OS revision such as Ubuntu20.04, Fedora34, RHEL8.4,
> etc.
> - All test done.
> *Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such
> as Ubuntu20.04 and Fedora34.
> - All test done.
> - Find one bug: https://bugs.dpdk.org/show_bug.cgi?id=841
> Marvell Dev has provided patch and Intel validation team verify passed.
> Patch link:
> http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-
> ktejasree@marvell.com/
> * PF(i40e, ixgbe): test scenarios including
> RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
> - Execution rate is 60%. No new issue is found yet.
> * VF(i40e, ixgbe): test scenarios including VF-
> RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
>
> - Execution rate is 60%.
> - One bug https://bugs.dpdk.org/show_bug.cgi?id=845
> about "vm_hotplug: vf testpmd core dumped after executing "device_del
> dev1" in qemu" is found.
> Bad commit id is commit
> c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd Author: Harman Kalra
> <hkalra@marvell.com>
> * PF/VF(ice): test scenarios including Switch features/Package
> Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share
> code update/Flexible Descriptor, etc.
> - Execution rate is 60%.
> - One bug about kni_autotest failed on Suse15.3. Trying to
> find bad commit id. Known issues, Intel dev is under investigating.
>
> * Intel NIC single core/NIC performance: test scenarios including
> PF/VF single core performance test, RFC2544 Zero packet loss performance
> test, etc.
> - Execution rate is 60%.
> - One bug about nic single core performance drop 2% is
> found. Bad commit id is commit: efc6f9104c80d39ec168/Author: Olivier Matz
> <olivier.matz@6wind.com>
> * Power and IPsec:
> * Power: test scenarios including bi-
> direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
> - All passed.
> * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library
> basic test - QAT&SW/FIB library, etc.
> - Not Start.
> # Basic cryptodev and virtio testing
> * Virtio: both function and performance test are covered. Such as
> PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf
> testing/VMAWARE ESXI 7.0u3, etc.
> - Execution rate is 80%.
> - Two new bugs are found.
> - One about VMware ESXI 7.0U3: failed to start port.
> Intel Dev is under investigating.
> - One https://bugs.dpdk.org/show_bug.cgi?id=840
> about "dpdk-pdump capture the pcap file content are wrong" is found.
> Bad commit id: commit
> 10f726efe26c55805cf0bf6ca1b80e97b98eb724 //bad commit id Author:
> Stephen Hemminger <stephen@networkplumber.org>
> * Cryptodev:
> *Function test: test scenarios including Cryptodev API
> testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
> - Execution rate is 60%
> - Two new bugs are found.
> - One
> https://bugs.dpdk.org/show_bug.cgi?id=843 about crypto performance tests
> for QAT are failing. Bad commit id is
> 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author: Harman Kalra
> <hkalra@marvell.com>
> - One
> https://bugs.dpdk.org/show_bug.cgi?id=842 about FIP tests are failing. Bad
> commit id is commit f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f Author:
> Akhil Goyal gakhil@marvell.com
> *Performance test: test scenarios including Throughput
> Performance /Cryptodev Latency, etc.
> - Execution rate is 10%. Most of performance test are
> blocked by Bug843.
Update on the test status for the Intel part. So far, dpdk 21.11-rc1 testing is almost finished. No critical issue is found.
One high-priority issue, https://bugs.dpdk.org/show_bug.cgi?id=843, impacts cryptodev function and performance tests.
A fix, https://git.dpdk.org/dpdk/commit/?id=eb89595d45ca268ebe6c0cb88f0ae17dba08d8f6, has been merged into the dpdk main branch.
# Basic Intel(R) NIC testing
* Build or compile:
*Build: cover the build test combination with latest GCC/Clang/ICC version and the popular OS revision such as Ubuntu20.04, Fedora34, RHEL8.4, etc.
- All test done. All passed.
*Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such as Ubuntu20.04 and RHEL8.4.
- All test done.
- Find one bug: https://bugs.dpdk.org/show_bug.cgi?id=841 Marvell Dev has provided patch and verify passed.
Patch link: http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-ktejasree@marvell.com/, patch has been applied into dpdk-next-net-mrvl/for-next-net.
* PF(i40e, ixgbe): test scenarios including RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- All test done.
- Find 5 new bugs.
a, https://bugs.dpdk.org/show_bug.cgi?id=863 external_mempool_handler:execute mempool_autotest command failed on FreeBSD: verify patch passed.
b, https://bugs.dpdk.org/show_bug.cgi?id=864 pmd_stacked_bonded/test_mode_backup_rx: after setting up stacked bonded ports, start the top-level bond port.
- Has a patch, but verification failed.
c, https://bugs.dpdk.org/show_bug.cgi?id=865 launch testpmd with "--vfio-intr=legacy" appears core dumped.
- Has patch from Redhat, Intel validation team will verify it later.
d, when the rx_offload rss_hash is set to off, port start will automatically load rss_hash: Intel dev is investigating.
e, checksum_offload/hardware_checksum_check_l4_tx: sctp checksum value incorrect: Intel dev is investigating.
* VF(i40e, ixgbe): test scenarios including VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- Execution rate is 60%.
- Found 3 new bugs; Intel developers are investigating.
a, https://bugs.dpdk.org/show_bug.cgi?id=845 about "vm_hotplug: vf testpmd core dumped after executing "device_del dev1" in qemu" was found.
- Bad commit id is commit c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd Author: Harman Kalra <hkalra@marvell.com>
- Marvell developers have no similar environment, and the Intel validation team has not found an available app to reproduce this bug yet.
b, when sending packets with VLAN IDs (1~4095), the VF port can receive them on i40e-2.17.1, which is not as expected: Intel developers are investigating.
c, ixgbe_vf_get_extra_queue_information/test_enable_dcb: executing "port config 0 dcb vt on 4 pfc off" failed under testpmd.
* PF/VF(ice): test scenarios including Switch features/Package Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share code update/Flexible Descriptor, etc.
- All test done. No new issues were found during 21.11-rc1. Some known issues are being investigated by Intel developers.
* Intel NIC single core/NIC performance: test scenarios including PF/VF single core performance test, RFC2544 Zero packet loss performance test, etc.
- All test done. No big performance drop.
* Power and IPsec:
* Power: test scenarios including bi-direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
- All passed.
* IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - QAT&SW/FIB library, etc.
- All passed.
# Basic cryptodev and virtio testing
* Virtio: both function and performance tests are covered, such as PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMware ESXi 7.0U3, etc.
- All test done.
- 3 new bugs were found.
- One about VMware ESXi 7.0U3: failed to start port. Intel developers are investigating.
- One https://bugs.dpdk.org/show_bug.cgi?id=840 about "dpdk-pdump capture the pcap file content are wrong" was found.
Bad commit id: commit 10f726efe26c55805cf0bf6ca1b80e97b98eb724 Author: Stephen Hemminger <stephen@networkplumber.org>
- vhost_event_idx_interrupt/wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt: after starting 2 packed-ring VMs, lcores cannot be woken up.
- Intel developers are investigating.
* Cryptodev:
*Function test: test scenarios including Cryptodev API testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
- All test done.
- 2 new bugs were found.
- One https://bugs.dpdk.org/show_bug.cgi?id=843 about crypto performance tests for QAT failing. The patch has been merged into the dpdk main branch.
- One https://bugs.dpdk.org/show_bug.cgi?id=842 about FIPS tests failing. A fix patch exists and passed verification.
*Performance test: test scenarios including Throughput Performance/Cryptodev Latency, etc.
- All test done. No big performance drop. Most of the performance tests were blocked by Bug 843.
BRs
Yu Jiang
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3] vhost: mark vDPA driver API as internal
@ 2021-11-02 9:56 4% Maxime Coquelin
0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-11-02 9:56 UTC (permalink / raw)
To: dev, chenbo.xia, xuemingl, xiao.w.wang, david.marchand
Cc: Maxime Coquelin, Thomas Monjalon
This patch marks the vDPA driver APIs as internal and
renames the corresponding header file to vdpa_driver.h.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
Changes in v3:
==============
- Update deprecation notice and release note
Changes in v2:
=============
- Alphabetical ordering in version.map (David)
- Rename header to vdpa_driver.h (David)
- Add Techboard in Cc to vote for API breakage exception
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 4 ++++
drivers/vdpa/ifc/ifcvf_vdpa.c | 2 +-
drivers/vdpa/mlx5/mlx5_vdpa.h | 2 +-
lib/vhost/meson.build | 4 +++-
lib/vhost/vdpa.c | 2 +-
lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} | 12 +++++++++---
lib/vhost/version.map | 13 +++++++++----
lib/vhost/vhost.h | 2 +-
9 files changed, 29 insertions(+), 16 deletions(-)
rename lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} (95%)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..ce1b727e77 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -107,10 +107,6 @@ Deprecation Notices
is deprecated as ambiguous with respect to the embedded switch. The use of
these attributes will become invalid starting from DPDK 22.11.
-* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
- ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
- driver interface will be marked as internal in DPDK v21.11.
-
* vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``
in DPDK v21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 98d50a160b..7c2c976d47 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -475,6 +475,10 @@ API Changes
* eventdev: Moved memory used by timer adapters to hugepage. This will prevent
TLB misses if any and aligns to memory structure of other subsystems.
+* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
+ ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
+ driver interface are marked as internal.
+
ABI Changes
-----------
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index dd5251d382..3853c4cf7e 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -17,7 +17,7 @@
#include <rte_bus_pci.h>
#include <rte_vhost.h>
#include <rte_vdpa.h>
-#include <rte_vdpa_dev.h>
+#include <vdpa_driver.h>
#include <rte_vfio.h>
#include <rte_spinlock.h>
#include <rte_log.h>
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index cf4f384fa4..a6c9404cb0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -12,7 +12,7 @@
#pragma GCC diagnostic ignored "-Wpedantic"
#endif
#include <rte_vdpa.h>
-#include <rte_vdpa_dev.h>
+#include <vdpa_driver.h>
#include <rte_vhost.h>
#ifdef PEDANTIC
#pragma GCC diagnostic error "-Wpedantic"
diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index 2d8fe0239f..cdb37a4814 100644
--- a/lib/vhost/meson.build
+++ b/lib/vhost/meson.build
@@ -29,9 +29,11 @@ sources = files(
)
headers = files(
'rte_vdpa.h',
- 'rte_vdpa_dev.h',
'rte_vhost.h',
'rte_vhost_async.h',
'rte_vhost_crypto.h',
)
+driver_sdk_headers = files(
+ 'vdpa_driver.h',
+)
deps += ['ethdev', 'cryptodev', 'hash', 'pci']
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 6dd91859ac..09ad5d866e 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -17,7 +17,7 @@
#include <rte_tailq.h>
#include "rte_vdpa.h"
-#include "rte_vdpa_dev.h"
+#include "vdpa_driver.h"
#include "vhost.h"
/** Double linked list of vDPA devices. */
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/vdpa_driver.h
similarity index 95%
rename from lib/vhost/rte_vdpa_dev.h
rename to lib/vhost/vdpa_driver.h
index b0f494815f..fc2d6acedd 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/vdpa_driver.h
@@ -2,11 +2,13 @@
* Copyright(c) 2018 Intel Corporation
*/
-#ifndef _RTE_VDPA_H_DEV_
-#define _RTE_VDPA_H_DEV_
+#ifndef _VDPA_DRIVER_H_
+#define _VDPA_DRIVER_H_
#include <stdbool.h>
+#include <rte_compat.h>
+
#include "rte_vhost.h"
#include "rte_vdpa.h"
@@ -88,6 +90,7 @@ struct rte_vdpa_device {
* @return
* vDPA device pointer on success, NULL on failure
*/
+__rte_internal
struct rte_vdpa_device *
rte_vdpa_register_device(struct rte_device *rte_dev,
struct rte_vdpa_dev_ops *ops);
@@ -100,6 +103,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev,
* @return
* device id on success, -1 on failure
*/
+__rte_internal
int
rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
@@ -115,6 +119,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
* @return
* 0 on success, -1 on failure
*/
+__rte_internal
int
rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
@@ -132,7 +137,8 @@ rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
* @return
* number of synced used entries on success, -1 on failure
*/
+__rte_internal
int
rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
-#endif /* _RTE_VDPA_DEV_H_ */
+#endif /* _VDPA_DRIVER_H_ */
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index c8599ddb97..a7ef7f1976 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -8,10 +8,7 @@ DPDK_22 {
rte_vdpa_get_rte_device;
rte_vdpa_get_stats;
rte_vdpa_get_stats_names;
- rte_vdpa_register_device;
- rte_vdpa_relay_vring_used;
rte_vdpa_reset_stats;
- rte_vdpa_unregister_device;
rte_vhost_avail_entries;
rte_vhost_clr_inflight_desc_packed;
rte_vhost_clr_inflight_desc_split;
@@ -52,7 +49,6 @@ DPDK_22 {
rte_vhost_get_vring_base_from_inflight;
rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
- rte_vhost_host_notifier_ctrl;
rte_vhost_log_used_vring;
rte_vhost_log_write;
rte_vhost_rx_queue_count;
@@ -89,3 +85,12 @@ EXPERIMENTAL {
# added in 21.11
rte_vhost_get_monitor_addr;
};
+
+INTERNAL {
+ global;
+
+ rte_vdpa_register_device;
+ rte_vdpa_relay_vring_used;
+ rte_vdpa_unregister_device;
+ rte_vhost_host_notifier_ctrl;
+};
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 05ccc35f37..c07219296d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -22,7 +22,7 @@
#include "rte_vhost.h"
#include "rte_vdpa.h"
-#include "rte_vdpa_dev.h"
+#include "vdpa_driver.h"
#include "rte_vhost_async.h"
--
2.31.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH] vhost: rename driver callbacks struct
@ 2021-11-02 10:47 4% Maxime Coquelin
2021-11-03 8:16 0% ` Xia, Chenbo
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2021-11-02 10:47 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin
As previously announced, this patch renames struct
vhost_device_ops to struct rte_vhost_device_ops.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_21_11.rst | 2 ++
drivers/net/vhost/rte_eth_vhost.c | 2 +-
examples/vdpa/main.c | 2 +-
examples/vhost/main.c | 2 +-
examples/vhost_blk/vhost_blk.c | 2 +-
examples/vhost_blk/vhost_blk.h | 2 +-
examples/vhost_crypto/main.c | 2 +-
lib/vhost/rte_vhost.h | 4 ++--
lib/vhost/socket.c | 6 +++---
lib/vhost/vhost.h | 4 ++--
11 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..a9e2433988 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -111,9 +111,6 @@ Deprecation Notices
``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
driver interface will be marked as internal in DPDK v21.11.
-* vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``
- in DPDK v21.11.
-
* vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 98d50a160b..dea038e3ac 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -564,6 +564,8 @@ ABI Changes
* eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
+* vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``.
+
Known Issues
------------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 8bb3b27d01..070f0e6dfd 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -975,7 +975,7 @@ vring_state_changed(int vid, uint16_t vring, int enable)
return 0;
}
-static struct vhost_device_ops vhost_ops = {
+static struct rte_vhost_device_ops vhost_ops = {
.new_device = new_device,
.destroy_device = destroy_device,
.vring_state_changed = vring_state_changed,
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index 097a267b8c..5ab07655ae 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -153,7 +153,7 @@ destroy_device(int vid)
}
}
-static const struct vhost_device_ops vdpa_sample_devops = {
+static const struct rte_vhost_device_ops vdpa_sample_devops = {
.new_device = new_device,
.destroy_device = destroy_device,
};
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 58e12aa710..8685dfd81b 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1519,7 +1519,7 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
* These callback allow devices to be added to the data core when configuration
* has been fully complete.
*/
-static const struct vhost_device_ops virtio_net_device_ops =
+static const struct rte_vhost_device_ops virtio_net_device_ops =
{
.new_device = new_device,
.destroy_device = destroy_device,
diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
index fe2b4e4803..feadacc62e 100644
--- a/examples/vhost_blk/vhost_blk.c
+++ b/examples/vhost_blk/vhost_blk.c
@@ -753,7 +753,7 @@ new_connection(int vid)
return 0;
}
-struct vhost_device_ops vhost_blk_device_ops = {
+struct rte_vhost_device_ops vhost_blk_device_ops = {
.new_device = new_device,
.destroy_device = destroy_device,
.new_connection = new_connection,
diff --git a/examples/vhost_blk/vhost_blk.h b/examples/vhost_blk/vhost_blk.h
index 540998eb1b..975f0b4065 100644
--- a/examples/vhost_blk/vhost_blk.h
+++ b/examples/vhost_blk/vhost_blk.h
@@ -104,7 +104,7 @@ struct vhost_blk_task {
};
extern struct vhost_blk_ctrlr *g_vhost_ctrlr;
-extern struct vhost_device_ops vhost_blk_device_ops;
+extern struct rte_vhost_device_ops vhost_blk_device_ops;
int vhost_bdev_process_blk_commands(struct vhost_block_dev *bdev,
struct vhost_blk_task *task);
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index dea7dcbd07..7d75623a5e 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -363,7 +363,7 @@ destroy_device(int vid)
RTE_LOG(INFO, USER1, "Vhost Crypto Device %i Removed\n", vid);
}
-static const struct vhost_device_ops virtio_crypto_device_ops = {
+static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
.new_device = new_device,
.destroy_device = destroy_device,
};
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index 6f0915b98f..af0afbcf60 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -264,7 +264,7 @@ struct rte_vhost_user_extern_ops {
/**
* Device and vring operations.
*/
-struct vhost_device_ops {
+struct rte_vhost_device_ops {
int (*new_device)(int vid); /**< Add device. */
void (*destroy_device)(int vid); /**< Remove device. */
@@ -606,7 +606,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
/* Register callbacks. */
int rte_vhost_driver_callback_register(const char *path,
- struct vhost_device_ops const * const ops);
+ struct rte_vhost_device_ops const * const ops);
/**
*
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index c6548608a3..82963c1e6d 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -58,7 +58,7 @@ struct vhost_user_socket {
struct rte_vdpa_device *vdpa_dev;
- struct vhost_device_ops const *notify_ops;
+ struct rte_vhost_device_ops const *notify_ops;
};
struct vhost_user_connection {
@@ -1093,7 +1093,7 @@ rte_vhost_driver_unregister(const char *path)
*/
int
rte_vhost_driver_callback_register(const char *path,
- struct vhost_device_ops const * const ops)
+ struct rte_vhost_device_ops const * const ops)
{
struct vhost_user_socket *vsocket;
@@ -1106,7 +1106,7 @@ rte_vhost_driver_callback_register(const char *path,
return vsocket ? 0 : -1;
}
-struct vhost_device_ops const *
+struct rte_vhost_device_ops const *
vhost_driver_callback_get(const char *path)
{
struct vhost_user_socket *vsocket;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 05ccc35f37..080c67ef99 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -394,7 +394,7 @@ struct virtio_net {
uint16_t mtu;
uint8_t status;
- struct vhost_device_ops const *notify_ops;
+ struct rte_vhost_device_ops const *notify_ops;
uint32_t nr_guest_pages;
uint32_t max_guest_pages;
@@ -702,7 +702,7 @@ void vhost_enable_linearbuf(int vid);
int vhost_enable_guest_notification(struct virtio_net *dev,
struct vhost_virtqueue *vq, int enable);
-struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
+struct rte_vhost_device_ops const *vhost_driver_callback_get(const char *path);
/*
* Backend-specific cleanup.
--
2.31.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] Overriding rte_config.h
@ 2021-11-02 12:24 3% ` Ananyev, Konstantin
2021-11-02 14:19 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-02 12:24 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Ben Magistro, dev
> > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > With the transition to meson, what is the best way to provide custom values
> > > > to parameters in rte_config.h? When using makefiles, (from memory, I
> > > > think) we used common_base as a template that was copied in as a
> > > > replacement for defconfig_x86.... Our current thinking is to apply a
> > > > locally maintained patch so that we can track custom values easier to the
> > > > rte_config.h file unless there is another way to pass in an overridden
> > > > value. As an example, one of the values we are customizing is
> > > > IP_FRAG_MAX_FRAG.
> > > >
> > > > Cheers,
> > > >
> > > There is no one defined way for overriding values in rte_config with the
> > > meson build system, as values there are ones that should rarely need to be
> > > overridden. If it's the case that one does need tuning, we generally want
> > > to look to either change the default so it works for everyone, or
> > > alternatively look to replace it with a runtime option.
> > >
> > > In the absense of that, a locally maintained patch may be reasonable. To
> > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > a newer default value in DPDK itself, since the current default is fairly
> > > low?
> >
> > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > to cover common jumbo frame size (9K) pretty easily.
> > As a drawback default reassembly table size will double.
>
> Maybe not. I'm not an expert in the library, but it seems the basic struct
> used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> the other data in the struct and the linked-list overheads, the actual size
> increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> on my debug build it goes from 192B to 256B.
Ah yes, you're right; struct ip_frag should fit into 16B, the key seems to be the biggest one.
>
> > Even better would be to go a step further and rework lib/ip_frag
> > to make it configurable runtime parameter.
> >
> Agree. However, that's not as quick a fix as just increasing the default
> max segs value which could be done immediately if there is consensus on it.
You mean for 21.11?
I don't mind in principle, but would like to know other people's thoughts here.
Another thing - we didn't announce it in advance, and it is definitely an ABI change.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] Overriding rte_config.h
2021-11-02 12:24 3% ` Ananyev, Konstantin
@ 2021-11-02 14:19 3% ` Bruce Richardson
2021-11-02 15:00 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-11-02 14:19 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Ben Magistro, dev
On Tue, Nov 02, 2021 at 12:24:43PM +0000, Ananyev, Konstantin wrote:
>
> > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > With the transition to meson, what is the best way to provide custom values
> > > > > to parameters in rte_config.h? When using makefiles, (from memory, I
> > > > > think) we used common_base as a template that was copied in as a
> > > > > replacement for defconfig_x86.... Our current thinking is to apply a
> > > > > locally maintained patch so that we can track custom values easier to the
> > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > value. As an example, one of the values we are customizing is
> > > > > IP_FRAG_MAX_FRAG.
> > > > >
> > > > > Cheers,
> > > > >
> > > > There is no one defined way for overriding values in rte_config with the
> > > > meson build system, as values there are ones that should rarely need to be
> > > > overridden. If it's the case that one does need tuning, we generally want
> > > > to look to either change the default so it works for everyone, or
> > > > alternatively look to replace it with a runtime option.
> > > >
> > > > In the absense of that, a locally maintained patch may be reasonable. To
> > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > a newer default value in DPDK itself, since the current default is fairly
> > > > low?
> > >
> > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > to cover common jumbo frame size (9K) pretty easily.
> > > As a drawback default reassembly table size will double.
> >
> > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > the other data in the struct and the linked-list overheads, the actual size
> > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > on my debug build it goes from 192B to 256B.
>
> Ah yes, you right, struct ip_frag should fit into 16B, key seems the biggest one.
>
> >
> > > Even better would be to go a step further and rework lib/ip_frag
> > > to make it configurable runtime parameter.
> > >
> > Agree. However, that's not as quick a fix as just increasing the default
> > max segs value which could be done immediately if there is consensus on it.
>
> You mean for 21.11?
> I don't mind in principle, but would like to know other people thoughts here.
> Another thing - we didn't announce it in advance, and it is definitely an ABI change.
I notice from this patch you submitted that the main structure in question
is being hidden[1]. Will it still be an ABI change if that patch is merged
in? Alternatively, should a fragment count increase be considered as part of
that change?
/Bruce
[1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] Overriding rte_config.h
2021-11-02 14:19 3% ` Bruce Richardson
@ 2021-11-02 15:00 0% ` Ananyev, Konstantin
2021-11-03 14:38 0% ` Ben Magistro
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-02 15:00 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Ben Magistro, dev
> > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > With the transition to meson, what is the best way to provide custom values
> > > > > > to parameters in rte_config.h? When using makefiles, (from memory, I
> > > > > > think) we used common_base as a template that was copied in as a
> > > > > > replacement for defconfig_x86.... Our current thinking is to apply a
> > > > > > locally maintained patch so that we can track custom values easier to the
> > > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > > value. As an example, one of the values we are customizing is
> > > > > > IP_FRAG_MAX_FRAG.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > There is no one defined way for overriding values in rte_config with the
> > > > > meson build system, as values there are ones that should rarely need to be
> > > > > overridden. If it's the case that one does need tuning, we generally want
> > > > > to look to either change the default so it works for everyone, or
> > > > > alternatively look to replace it with a runtime option.
> > > > >
> > > > > In the absense of that, a locally maintained patch may be reasonable. To
> > > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > > a newer default value in DPDK itself, since the current default is fairly
> > > > > low?
> > > >
> > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > to cover common jumbo frame size (9K) pretty easily.
> > > > As a drawback default reassembly table size will double.
> > >
> > > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > > the other data in the struct and the linked-list overheads, the actual size
> > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > > on my debug build it goes from 192B to 256B.
> >
> > Ah yes, you right, struct ip_frag should fit into 16B, key seems the biggest one.
> >
> > >
> > > > Even better would be to go a step further and rework lib/ip_frag
> > > > to make it configurable runtime parameter.
> > > >
> > > Agree. However, that's not as quick a fix as just increasing the default
> > > max segs value which could be done immediately if there is consensus on it.
> >
> > You mean for 21.11?
> > I don't mind in principle, but would like to know other people thoughts here.
> > Another thing - we didn't announce it in advance, and it is definitely an ABI change.
>
> I notice from this patch you submitted that the main structure in question
> is being hidden[1]. Will it still be an ABI change if that patch is merged
> in?
Yes, it would be, unfortunately:
struct rte_ip_frag_death_row still remains public.
> Alternatively, should a fragment count increase be considered as part of
> that change?
I don't think they are really related.
This patch just hides some structs that are already marked as 'internal'
and not used by the public API. It doesn't make any changes to the public structs' layout.
But I suppose we can bring that question (the increase of RTE_LIBRTE_IP_FRAG_MAX_FRAG) to
tomorrow's TB meeting and ask for approval.
> /Bruce
>
> [1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter
@ 2021-11-02 19:03 14% Konstantin Ananyev
2021-11-08 22:08 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-11-02 19:03 UTC (permalink / raw)
To: dev; +Cc: techboard, bruce.richardson, koncept1, Konstantin Ananyev
Increase the default value of the config parameter RTE_LIBRTE_IP_FRAG_MAX_FRAG
from 4 to 8. This parameter controls the maximum number of fragments per
packet in the IP reassembly table. Increasing this value from 4 to 8 will
allow users to cover the common case of a jumbo packet size of 9KB with
fragments of the default frame size (1500B).
As RTE_LIBRTE_IP_FRAG_MAX_FRAG is used in the definition of a public
structure (struct rte_ip_frag_death_row), this is an ABI change.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
config/rte_config.h | 2 +-
doc/guides/rel_notes/release_21_11.rst | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 1a66b42fcc..08e70af497 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -82,7 +82,7 @@
#define RTE_RAWDEV_MAX_DEVS 64
/* ip_fragmentation defines */
-#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
+#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8
#undef RTE_LIBRTE_IP_FRAG_TBL_STAT
/* rte_power defines */
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 502cc5ceb2..4d0f112b00 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -543,6 +543,14 @@ ABI Changes
* eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
+* Increase default value for config parameter ``RTE_LIBRTE_IP_FRAG_MAX_FRAG``
+ from ``4`` to ``8``. This parameter controls maximum number of fragments
+ per packet in ip reassembly table. Increasing this value from ``4`` to ``8``
+ will allow users to cover common case with jumbo packet size of ``9KB``
+ and fragments with default frame size ``(1500B)``.
+ As ``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` is used in definition of
+ public structure ``rte_ip_frag_death_row``, this is an ABI change.
+
Known Issues
------------
--
2.25.1
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
2021-10-28 10:17 3% ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
@ 2021-11-02 23:57 3% ` Liguzinski, WojciechX
2021-11-03 17:52 0% ` Thomas Monjalon
2021-11-04 10:40 3% ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-02 23:57 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu
Cc: megha.ajmera, Wojciech Liguzinski
From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat problem,
a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data structures,
adding a new set of data structures to the library, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Wojciech Liguzinski (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 259 +++--
lib/sched/rte_sched.h | 64 +-
lib/sched/version.map | 4 +
19 files changed, 2189 insertions(+), 281 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
@ 2021-11-03 5:25 3% ` Xia, Chenbo
2021-11-03 7:03 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-11-03 5:25 UTC (permalink / raw)
To: dev; +Cc: Kevin Traynor, Maxime Coquelin, Ray Kinsella
Hi,
I notice that, from the start, I should not have sent the notice, as the ABI policy says:
For removing the experimental tag associated with an API, deprecation notice is not required.
Sorry for the mistake.
/Chenbo
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> Sent: Wednesday, November 3, 2021 1:00 PM
> To: dev@dpdk.org
> Cc: Ray Kinsella <mdr@ashroe.eu>; Kevin Traynor <ktraynor@redhat.com>; Maxime
> Coquelin <maxime.coquelin@redhat.com>
> Subject: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
>
> Ten vhost APIs were announced to be stable and promoted in below
> commit, so remove the related deprecation notice.
>
> Fixes: 945ef8a04098 ("vhost: promote some APIs to stable")
>
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 8 --------
> 1 file changed, 8 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 4366015b01..4f7e95f05f 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -114,14 +114,6 @@ Deprecation Notices
> * vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``
> in DPDK v21.11.
>
> -* vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
> - ``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
> - ``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
> - ``rte_vhost_crypto_finalize_requests``, ``rte_vhost_crypto_set_zero_copy``,
> - ``rte_vhost_va_from_guest_pa``, ``rte_vhost_extern_callback_register``,
> - and ``rte_vhost_driver_set_protocol_features`` functions will be removed
> - and the API functions will be made stable in DPDK 21.11.
> -
> * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
> ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
> session and the private data of session. An opaque pointer can be exposed
> --
> 2.17.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
2021-11-03 5:25 3% ` Xia, Chenbo
@ 2021-11-03 7:03 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-03 7:03 UTC (permalink / raw)
To: Xia, Chenbo; +Cc: dev, Kevin Traynor, Maxime Coquelin, Ray Kinsella
On Wed, Nov 3, 2021 at 6:25 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> Hi,
>
> I notice that from the start, I should not have sent the notice, as the ABI policy says:
>
> For removing the experimental tag associated with an API, deprecation notice is not required.
>
> Sorry for the mistake.
It is not required, but announcing does not hurt.
A real issue would be the opposite :-).
Your patch lgtm, thanks Chenbo.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] vhost: rename driver callbacks struct
2021-11-02 10:47 4% [dpdk-dev] [PATCH] vhost: rename driver callbacks struct Maxime Coquelin
@ 2021-11-03 8:16 0% ` Xia, Chenbo
0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-11-03 8:16 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand; +Cc: Liu, Changpeng
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, November 2, 2021 6:48 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH] vhost: rename driver callbacks struct
>
> As previously announced, this patch renames struct
> vhost_device_ops to struct rte_vhost_device_ops.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 ---
> doc/guides/rel_notes/release_21_11.rst | 2 ++
> drivers/net/vhost/rte_eth_vhost.c | 2 +-
> examples/vdpa/main.c | 2 +-
> examples/vhost/main.c | 2 +-
> examples/vhost_blk/vhost_blk.c | 2 +-
> examples/vhost_blk/vhost_blk.h | 2 +-
> examples/vhost_crypto/main.c | 2 +-
> lib/vhost/rte_vhost.h | 4 ++--
> lib/vhost/socket.c | 6 +++---
> lib/vhost/vhost.h | 4 ++--
You missed two in vhost_lib.rst :)
The testing issues reported in patchwork are expected, as SPDK uses
this struct; we can ignore them since SPDK will rename it when it
adapts to DPDK 21.11.
With above fixed:
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
> 11 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 4366015b01..a9e2433988 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -111,9 +111,6 @@ Deprecation Notices
> ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
> driver interface will be marked as internal in DPDK v21.11.
>
> -* vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``
> - in DPDK v21.11.
> -
> * vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
> ``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
> ``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 98d50a160b..dea038e3ac 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -564,6 +564,8 @@ ABI Changes
>
> * eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
>
> +* vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``.
> +
>
> Known Issues
> ------------
> diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> index 8bb3b27d01..070f0e6dfd 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -975,7 +975,7 @@ vring_state_changed(int vid, uint16_t vring, int enable)
> return 0;
> }
>
> -static struct vhost_device_ops vhost_ops = {
> +static struct rte_vhost_device_ops vhost_ops = {
> .new_device = new_device,
> .destroy_device = destroy_device,
> .vring_state_changed = vring_state_changed,
> diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
> index 097a267b8c..5ab07655ae 100644
> --- a/examples/vdpa/main.c
> +++ b/examples/vdpa/main.c
> @@ -153,7 +153,7 @@ destroy_device(int vid)
> }
> }
>
> -static const struct vhost_device_ops vdpa_sample_devops = {
> +static const struct rte_vhost_device_ops vdpa_sample_devops = {
> .new_device = new_device,
> .destroy_device = destroy_device,
> };
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 58e12aa710..8685dfd81b 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -1519,7 +1519,7 @@ vring_state_changed(int vid, uint16_t queue_id, int
> enable)
> * These callback allow devices to be added to the data core when
> configuration
> * has been fully complete.
> */
> -static const struct vhost_device_ops virtio_net_device_ops =
> +static const struct rte_vhost_device_ops virtio_net_device_ops =
> {
> .new_device = new_device,
> .destroy_device = destroy_device,
> diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
> index fe2b4e4803..feadacc62e 100644
> --- a/examples/vhost_blk/vhost_blk.c
> +++ b/examples/vhost_blk/vhost_blk.c
> @@ -753,7 +753,7 @@ new_connection(int vid)
> return 0;
> }
>
> -struct vhost_device_ops vhost_blk_device_ops = {
> +struct rte_vhost_device_ops vhost_blk_device_ops = {
> .new_device = new_device,
> .destroy_device = destroy_device,
> .new_connection = new_connection,
> diff --git a/examples/vhost_blk/vhost_blk.h b/examples/vhost_blk/vhost_blk.h
> index 540998eb1b..975f0b4065 100644
> --- a/examples/vhost_blk/vhost_blk.h
> +++ b/examples/vhost_blk/vhost_blk.h
> @@ -104,7 +104,7 @@ struct vhost_blk_task {
> };
>
> extern struct vhost_blk_ctrlr *g_vhost_ctrlr;
> -extern struct vhost_device_ops vhost_blk_device_ops;
> +extern struct rte_vhost_device_ops vhost_blk_device_ops;
>
> int vhost_bdev_process_blk_commands(struct vhost_block_dev *bdev,
> struct vhost_blk_task *task);
> diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
> index dea7dcbd07..7d75623a5e 100644
> --- a/examples/vhost_crypto/main.c
> +++ b/examples/vhost_crypto/main.c
> @@ -363,7 +363,7 @@ destroy_device(int vid)
> RTE_LOG(INFO, USER1, "Vhost Crypto Device %i Removed\n", vid);
> }
>
> -static const struct vhost_device_ops virtio_crypto_device_ops = {
> +static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
> .new_device = new_device,
> .destroy_device = destroy_device,
> };
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index 6f0915b98f..af0afbcf60 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -264,7 +264,7 @@ struct rte_vhost_user_extern_ops {
> /**
> * Device and vring operations.
> */
> -struct vhost_device_ops {
> +struct rte_vhost_device_ops {
> int (*new_device)(int vid); /**< Add device. */
> void (*destroy_device)(int vid); /**< Remove device. */
>
> @@ -606,7 +606,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
>
> /* Register callbacks. */
> int rte_vhost_driver_callback_register(const char *path,
> - struct vhost_device_ops const * const ops);
> + struct rte_vhost_device_ops const * const ops);
>
> /**
> *
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index c6548608a3..82963c1e6d 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -58,7 +58,7 @@ struct vhost_user_socket {
>
> struct rte_vdpa_device *vdpa_dev;
>
> - struct vhost_device_ops const *notify_ops;
> + struct rte_vhost_device_ops const *notify_ops;
> };
>
> struct vhost_user_connection {
> @@ -1093,7 +1093,7 @@ rte_vhost_driver_unregister(const char *path)
> */
> int
> rte_vhost_driver_callback_register(const char *path,
> - struct vhost_device_ops const * const ops)
> + struct rte_vhost_device_ops const * const ops)
> {
> struct vhost_user_socket *vsocket;
>
> @@ -1106,7 +1106,7 @@ rte_vhost_driver_callback_register(const char *path,
> return vsocket ? 0 : -1;
> }
>
> -struct vhost_device_ops const *
> +struct rte_vhost_device_ops const *
> vhost_driver_callback_get(const char *path)
> {
> struct vhost_user_socket *vsocket;
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 05ccc35f37..080c67ef99 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -394,7 +394,7 @@ struct virtio_net {
> uint16_t mtu;
> uint8_t status;
>
> - struct vhost_device_ops const *notify_ops;
> + struct rte_vhost_device_ops const *notify_ops;
>
> uint32_t nr_guest_pages;
> uint32_t max_guest_pages;
> @@ -702,7 +702,7 @@ void vhost_enable_linearbuf(int vid);
> int vhost_enable_guest_notification(struct virtio_net *dev,
> struct vhost_virtqueue *vq, int enable);
>
> -struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
> +struct rte_vhost_device_ops const *vhost_driver_callback_get(const char
> *path);
>
> /*
> * Backend-specific cleanup.
> --
> 2.31.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] Overriding rte_config.h
2021-11-02 15:00 0% ` Ananyev, Konstantin
@ 2021-11-03 14:38 0% ` Ben Magistro
2021-11-04 11:03 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Ben Magistro @ 2021-11-03 14:38 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Richardson, Bruce, dev, ben.magistro, Stefan Baranoff
Thanks for the clarification.
I agree bumping RTE_LIBRTE_IP_FRAG_MAX_FRAG to 8 probably makes sense to
easily support jumbo frames.
The other use case we have is supporting highly fragmented UDP. To support
this, we were increasing it to 64 (the next power of 2) based on a 64K UDP max and
a link MTU of 1200 (VPN/tunneling). I am not sure this is a value that
makes sense for the majority of use cases.
On Tue, Nov 2, 2021 at 11:09 AM Ananyev, Konstantin <
konstantin.ananyev@intel.com> wrote:
>
> > > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > > With the transition to meson, what is the best way to provide
> custom values
> > > > > > > to parameters in rte_config.h? When using makefiles, (from
> memory, I
> > > > > > > think) we used common_base as a template that was copied in as
> a
> > > > > > > replacement for defconfig_x86.... Our current thinking is to
> apply a
> > > > > > > locally maintained patch so that we can track custom values
> easier to the
> > > > > > > rte_config.h file unless there is another way to pass in an
> overridden
> > > > > > > value. As an example, one of the values we are customizing is
> > > > > > > IP_FRAG_MAX_FRAG.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > There is no one defined way for overriding values in rte_config
> with the
> > > > > > meson build system, as values there are ones that should rarely
> need to be
> > > > > > overridden. If it's the case that one does need tuning, we
> generally want
> > > > > > to look to either change the default so it works for everyone, or
> > > > > > alternatively look to replace it with a runtime option.
> > > > > >
> > > > > > In the absense of that, a locally maintained patch may be
> reasonable. To
> > > > > > what value do you want to change MAX_FRAG? Would it be worth
> considering as
> > > > > > a newer default value in DPDK itself, since the current default
> is fairly
> > > > > > low?
> > > > >
> > > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > > to cover common jumbo frame size (9K) pretty easily.
> > > > > As a drawback default reassembly table size will double.
> > > >
> > > > Maybe not. I'm not an expert in the library, but it seems the basic
> struct
> > > > used for tracking the packets and fragments is "struct ip_frag_pkt".
> Due to
> > > > the other data in the struct and the linked-list overheads, the
> actual size
> > > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According
> to gdb
> > > > on my debug build it goes from 192B to 256B.
> > >
> > > Ah yes, you're right, struct ip_frag should fit into 16B, the key seems the biggest one.
> biggest one.
> > >
> > > >
> > > > > Even better would be to go a step further and rework lib/ip_frag
> > > > > to make it configurable runtime parameter.
> > > > >
> > > > Agree. However, that's not as quick a fix as just increasing the
> default
> > > > max segs value which could be done immediately if there is consensus
> on it.
> > >
> > > You mean for 21.11?
> > > I don't mind in principle, but would like to know other people
> thoughts here.
> > > Another thing - we didn't announce it in advance, and it is
> definitely an ABI change.
> >
> > I notice from this patch you submitted that the main structure in
> question
> > is being hidden[1]. Will it still be an ABI change if that patch is
> merged
> > in?
>
> Yes, it would unfortunately:
> struct rte_ip_frag_death_row still remains public.
>
> > Alternatively, should a fragment count increase be considered as part of
> > that change?
>
> I don't think they are really related.
> This patch just hides some structs that are already marked as 'internal'
> and not used by public API. It doesn't make any changes in the public
> structs layout.
> But I suppose we can bring that question (increase of
> RTE_LIBRTE_IP_FRAG_MAX_FRAG) to
> tomorrow's TB meeting, and ask for approval.
>
> > /Bruce
> >
> > [1]
> http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] doc: remove deprecation notice for interrupt
@ 2021-11-03 17:50 5% Harman Kalra
0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-11-03 17:50 UTC (permalink / raw)
To: dev, Ray Kinsella; +Cc: Harman Kalra
The change announced in the deprecation notice targeted for 21.11 has been
committed, with the following as the first commit of the series.
Fixes: b7c984291611 ("interrupts: add allocator and accessors")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
1 file changed, 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..0545245222 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,9 +17,6 @@ Deprecation Notices
* eal: The function ``rte_eal_remote_launch`` will return new error codes
after read or write error on the pipe, instead of calling ``rte_panic``.
-* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
- in future.
-
* rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
not allow for writing optimized code for all the CPU architectures supported
in DPDK. DPDK has adopted the atomic operations from
--
2.18.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
2021-11-02 23:57 3% ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
@ 2021-11-03 17:52 0% ` Thomas Monjalon
2021-11-04 8:29 0% ` Liguzinski, WojciechX
2021-11-04 10:40 3% ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-03 17:52 UTC (permalink / raw)
To: Wojciech Liguzinski
Cc: dev, jasvinder.singh, cristian.dumitrescu, megha.ajmera, john.mcnamara
03/11/2021 00:57, Liguzinski, WojciechX:
> From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
>
> DPDK sched library is equipped with mechanism that secures it from the bufferbloat problem
> which is a situation when excess buffers in the network cause high latency and latency
> variation. Currently, it supports RED for active queue management. However, more
> advanced queue management is required to address this problem and provide desirable
> quality of service to users.
>
> This solution (RFC) proposes usage of new algorithm called "PIE" (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency to address
> the bufferbloat problem.
>
> The implementation of mentioned functionality includes modification of existing and
> adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is going
> to be prepared and sent.
>
> Wojciech Liguzinski (5):
> sched: add PIE based congestion management
Did you see the checkpatch issues on this patch?
http://mails.dpdk.org/archives/test-report/2021-November/238253.html
> example/qos_sched: add PIE support
The strict minimum is to explain why you add PIE and what the acronym means,
inside the commit log.
> example/ip_pipeline: add PIE support
Titles should follow the same convention as the git history.
For examples, they start with "examples/" as the directory name.
> doc/guides/prog_guide: added PIE
doc should be squashed with code patches
Is there any doc update related to the examples?
If not, it should be fully squashed with lib changes.
> app/test: add tests for PIE
If there is nothing special, it can be squashed with the lib patch.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
2021-11-03 17:52 0% ` Thomas Monjalon
@ 2021-11-04 8:29 0% ` Liguzinski, WojciechX
0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 8:29 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Singh, Jasvinder, Dumitrescu, Cristian, Ajmera, Megha,
Mcnamara, John
Hi Thomas,
Thanks, I will apply your suggestions asap.
Wojtek
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Wednesday, November 3, 2021 6:53 PM
To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
Cc: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Ajmera, Megha <megha.ajmera@intel.com>; Mcnamara, John <john.mcnamara@intel.com>
Subject: Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
03/11/2021 00:57, Liguzinski, WojciechX:
> From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
>
> DPDK sched library is equipped with mechanism that secures it from the
> bufferbloat problem which is a situation when excess buffers in the
> network cause high latency and latency variation. Currently, it
> supports RED for active queue management. However, more advanced queue
> management is required to address this problem and provide desirable quality of service to users.
>
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral controller Enhanced) that can effectively and
> directly control queuing latency to address the bufferbloat problem.
>
> The implementation of mentioned functionality includes modification of
> existing and adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation
> notice is going to be prepared and sent.
>
> Wojciech Liguzinski (5):
> sched: add PIE based congestion management
Did you see the checkpatch issues on this patch?
http://mails.dpdk.org/archives/test-report/2021-November/238253.html
> example/qos_sched: add PIE support
The strict minimum is to explain why you add PIE and what the acronym means, inside the commit log.
> example/ip_pipeline: add PIE support
Titles should follow the same convention as the git history.
For examples, they start with "examples/" as the directory name.
> doc/guides/prog_guide: added PIE
doc should be squashed with code patches.
Is there any doc update related to the examples?
If not, it should be fully squashed with lib changes.
> app/test: add tests for PIE
If there is nothing special, it can be squashed with the lib patch.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v21 0/3] Add PIE support for HQoS library
2021-11-02 23:57 3% ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
2021-11-03 17:52 0% ` Thomas Monjalon
@ 2021-11-04 10:40 3% ` Liguzinski, WojciechX
2021-11-04 10:49 3% ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 10:40 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu
Cc: megha.ajmera, Wojciech Liguzinski
From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide the desired
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, along with PIE-related APIs.
This affects structures in the public API/ABI, which is why a deprecation notice is going
to be prepared and sent.
Wojciech Liguzinski (3):
sched: add PIE based congestion management
examples/qos_sched: add PIE support
examples/ip_pipeline: add PIE support
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 255 +++--
lib/sched/rte_sched.h | 64 +-
lib/sched/version.map | 4 +
19 files changed, 2185 insertions(+), 281 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
2021-10-28 8:56 0% ` Andrew Rybchenko
@ 2021-11-04 10:45 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-04 10:45 UTC (permalink / raw)
To: Andrew Rybchenko, Kinsella, Ray, Thomas Monjalon, dev; +Cc: matan
On 10/28/2021 9:56 AM, Andrew Rybchenko wrote:
> On 10/28/21 11:38 AM, Kinsella, Ray wrote:
>>
>>
>> On 28/10/2021 09:35, Thomas Monjalon wrote:
>>> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
>>> and is integrated in error checks of ethdev library.
>>>
>>> It is promoted as stable ABI.
>>>
>>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>>> ---
>>> lib/ethdev/rte_ethdev.h | 4 ----
>>> lib/ethdev/version.map | 2 +-
>>> 2 files changed, 1 insertion(+), 5 deletions(-)
>>>
>> Acked-by: Ray Kinsella <mdr@ashroe.eu>
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v22 0/3] Add PIE support for HQoS library
2021-11-04 10:40 3% ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
@ 2021-11-04 10:49 3% ` Liguzinski, WojciechX
2021-11-04 11:03 3% ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
2021-11-04 14:55 3% ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 10:49 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu
Cc: megha.ajmera, Wojciech Liguzinski
From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide the desired
quality of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, along with PIE-related APIs.
This affects structures in the public API/ABI, which is why a deprecation notice is going
to be prepared and sent.
Wojciech Liguzinski (3):
sched: add PIE based congestion management
examples/qos_sched: add PIE support
examples/ip_pipeline: add PIE support
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 255 +++--
lib/sched/rte_sched.h | 64 +-
lib/sched/version.map | 4 +
19 files changed, 2185 insertions(+), 281 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] Overriding rte_config.h
2021-11-03 14:38 0% ` Ben Magistro
@ 2021-11-04 11:03 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-11-04 11:03 UTC (permalink / raw)
To: Ben Magistro; +Cc: Richardson, Bruce, dev, ben.magistro, Stefan Baranoff
Hi Ben,
I also don’t think 64 is a common case here.
For such cases we should probably think of a different approach to the reassembly table.
From: Ben Magistro <koncept1@gmail.com>
Sent: Wednesday, November 3, 2021 2:38 PM
To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: Richardson, Bruce <bruce.richardson@intel.com>; dev@dpdk.org; ben.magistro@trinitycyber.com; Stefan Baranoff <stefan.baranoff@trinitycyber.com>
Subject: Re: [dpdk-dev] Overriding rte_config.h
Thanks for the clarification.
I agree bumping RTE_LIBRTE_IP_FRAG_MAX_FRAG to 8 probably makes sense to easily support jumbo frames.
The other use case we have is supporting highly fragmented UDP. To support this we were increasing to 64 (next power of 2) based on a 64K UDP max and a link MTU of 1200 (VPN/tunneling). I am not sure this is a value that makes sense for the majority of use cases.
On Tue, Nov 2, 2021 at 11:09 AM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > With the transition to meson, what is the best way to provide custom values
> > > > > > to parameters in rte_config.h? When using makefiles, (from memory, I
> > > > > > think) we used common_base as a template that was copied in as a
> > > > > > replacement for defconfig_x86.... Our current thinking is to apply a
> > > > > > locally maintained patch so that we can track custom values easier to the
> > > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > > value. As an example, one of the values we are customizing is
> > > > > > IP_FRAG_MAX_FRAG.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > There is no one defined way for overriding values in rte_config with the
> > > > > meson build system, as values there are ones that should rarely need to be
> > > > > overridden. If it's the case that one does need tuning, we generally want
> > > > > to look to either change the default so it works for everyone, or
> > > > > alternatively look to replace it with a runtime option.
> > > > >
> > > > > In the absense of that, a locally maintained patch may be reasonable. To
> > > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > > a newer default value in DPDK itself, since the current default is fairly
> > > > > low?
> > > >
> > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > to cover common jumbo frame size (9K) pretty easily.
> > > > As a drawback default reassembly table size will double.
> > >
> > > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > > the other data in the struct and the linked-list overheads, the actual size
> > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > > on my debug build it goes from 192B to 256B.
> >
> > Ah yes, you're right, struct ip_frag should fit into 16B, the key seems the biggest one.
> >
> > >
> > > > Even better would be to go a step further and rework lib/ip_frag
> > > > to make it configurable runtime parameter.
> > > >
> > > Agree. However, that's not as quick a fix as just increasing the default
> > > max segs value which could be done immediately if there is consensus on it.
> >
> > You mean for 21.11?
> > I don't mind in principle, but would like to know other people thoughts here.
> > Another thing - we didn't announce it in advance, and it is definitely an ABI change.
>
> I notice from this patch you submitted that the main structure in question
> is being hidden[1]. Will it still be an ABI change if that patch is merged
> in?
Yes, it would unfortunately:
struct rte_ip_frag_death_row still remains public.
> Alternatively, should a fragment count increase be considered as part of
> that change?
I don't think they are really related.
This patch just hides some structs that are already marked as 'internal'
and not used by public API. It doesn't make any changes in the public structs layout.
But I suppose we can bring that question (increase of RTE_LIBRTE_IP_FRAG_MAX_FRAG) to
tomorrow's TB meeting, and ask for approval.
> /Bruce
>
> [1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v23 0/3] Add PIE support for HQoS library
2021-11-04 10:49 3% ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
@ 2021-11-04 11:03 3% ` Liguzinski, WojciechX
2021-11-04 14:55 3% ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 11:03 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu
Cc: megha.ajmera, Wojciech Liguzinski
From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.
This solution (RFC) proposes usage of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data
structures in the library, adding a new set of data structures, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Wojciech Liguzinski (3):
sched: add PIE based congestion management
examples/qos_sched: add PIE support
examples/ip_pipeline: add PIE support
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 254 +++--
lib/sched/rte_sched.h | 64 +-
lib/sched/version.map | 4 +
19 files changed, 2184 insertions(+), 281 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v24 0/3] Add PIE support for HQoS library
2021-11-04 10:49 3% ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
2021-11-04 11:03 3% ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
@ 2021-11-04 14:55 3% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-04 14:55 UTC (permalink / raw)
To: dev; +Cc: megha.ajmera
Last changes to make this series "more acceptable":
- RTE_SCHED_CMAN in rte_config.h, replacing RTE_SCHED_RED
- test file listed in MAINTAINERS
- few whitespaces fixed
From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.
This solution (RFC) proposes usage of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data
structures in the library, adding a new set of data structures, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Wojciech Liguzinski (3):
sched: add PIE based congestion management
examples/qos_sched: support PIE congestion management
examples/ip_pipeline: support PIE congestion management
MAINTAINERS | 1 +
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 2 +-
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 64 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 142 +--
examples/qos_sched/cfg_file.c | 127 ++-
examples/qos_sched/cfg_file.h | 5 +
examples/qos_sched/init.c | 27 +-
examples/qos_sched/main.h | 3 +
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 3 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 396 +++++++
lib/sched/rte_sched.c | 256 +++--
lib/sched/rte_sched.h | 64 +-
lib/sched/version.map | 4 +
20 files changed, 2185 insertions(+), 282 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] Minutes of Technical Board Meeting, 2021-Nov-03
@ 2021-11-04 19:54 4% Maxime Coquelin
0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-11-04 19:54 UTC (permalink / raw)
To: dev
Minutes of Technical Board Meeting, 2021-Nov-03
Members Attending
-----------------
-Aaron
-Ferruh
-Hemant
-Honnappa
-Jerin
-Kevin
-Konstantin
-Maxime (Chair)
-Olivier
-Stephen
-Thomas
NOTE: The technical board meets every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
NOTE: Next meeting will be on Wednesday 2021-Nov-17 @3pm UTC, and will
be chaired by Olivier.
# ENETFEC driver
- TB discussed whether depending on an out-of-tree Kernel module is
acceptable
-- TB voted to accept that the ENETFEC PMD relies on an out-of-tree kernel module
-- TB recommends avoiding out-of-tree Kernel modules, but the kernel
module required by the ENETFEC PMD is in the process of being upstreamed
- TB discussed whether having this driver as a VDEV is acceptable or if
a bus driver is required, knowing that only this device would use it
-- TB voted to accept this driver as a VDEV
# IP frag ABI change in v21.11 [0]
- This ABI change was not announced so TB approval was required
-- TB voted to accept this ABI change
# Communication plan around v21.11 release
- Thomas highlighted that a lot of changes are being introduced in the
v21.11 release.
- In addition to the usual release blog post, blog posts about specific
new features would be welcomed
-- TB calls for ideas to maintainers and contributors
# Feedback from Governing Board on proposal for technical board process
updates
- Honnappa proposes a new spreadsheet to improve the communication
between the technical and governing boards
# L3 forward mode in testpmd [1]
- Honnappa presented the reasons for this new forwarding mode
-- L3FWD is a standard benchmark for DPDK
-- L3FWD example misses debugging features present in testpmd
- Concerns raised about code duplication and bloating of testpmd
- Suggestions that adding more statistics and interactive mode to L3 FWD
would be preferable
-- But concerns that it would make this application too complex,
defeating the initial purpose of this example
- As no consensus has been reached, Honnappa proposed to reject/defer it
for now
# DMARC configuration
- Ali monitored the DMARC configuration changes done on the user and web
mailing lists
-- Better results have been observed
- TB voted to apply the new policy to the other mailing lists
- Ali will apply the new policy by the end of next week
# Patch from AMD to raise the maximum number of lcores
- Ran out of time, adding this item to the next meeting
[0]:
https://patches.dpdk.org/project/dpdk/patch/20211102190309.5795-1-konstantin.ananyev@intel.com/
[1]:
https://patchwork.dpdk.org/project/dpdk/patch/20210430213747.41530-2-kathleen.capella@arm.com/
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
2021-10-25 21:40 4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
2021-10-28 7:10 0% ` Jiang, YuX
@ 2021-11-05 21:51 0% ` Thinh Tran
2021-11-08 10:50 0% ` Pei Zhang
2 siblings, 0 replies; 200+ results
From: Thinh Tran @ 2021-11-05 21:51 UTC (permalink / raw)
To: dpdk-dev
Hi
IBM - Power Systems
DPDK v21.11-rc1-63-gbb0bd346d5
* Basic PF on Mellanox: No new issues or regressions were seen.
* Performance: not tested.
Systems tested:
- IBM Power9 PowerNV 9006-22P
OS: RHEL 8.4
GCC: version 8.3.1 20191121 (Red Hat 8.3.1-5)
NICs:
- Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
- firmware version: 16.29.1017
- MLNX_OFED_LINUX-5.2-1.0.4.1 (OFED-5.2-1.0.4)
Regards,
Thinh Tran
On 10/25/2021 4:40 PM, Thomas Monjalon wrote:
> A new DPDK release candidate is ready for testing:
> https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
>
> There are 1171 new patches in this snapshot, big as expected.
>
> Release notes:
> https://doc.dpdk.org/guides/rel_notes/release_21_11.html
>
> Highlights of 21.11-rc1:
> * General
> - more than 512 MSI-X interrupts
> - hugetlbfs subdirectories
> - mempool flag for non-IO usages
> - device class for DMA accelerators
> - DMA drivers for Intel DSA and IOAT
> * Networking
> - MTU handling rework
> - get all MAC addresses of a port
> - RSS based on L3/L4 checksum fields
> - flow match on L2TPv2 and PPP
> - flow flex parser for custom header
> - control delivery of HW Rx metadata
> - transfer flows API rework
> - shared Rx queue
> - Windows support of Intel e1000, ixgbe and iavf
> - testpmd multi-process
> - pcapng library and dumpcap tool
> * API/ABI
> - API namespace improvements (mempool, mbuf, ethdev)
> - API internals hidden (intr, ethdev, security, cryptodev, eventdev, cmdline)
> - flags check for future ABI compatibility (memzone, mbuf, mempool)
>
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
>
> Thank you everyone
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] eal/rwlock: add note about writer starvation
@ 2021-11-08 10:18 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-08 10:18 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Joyce Kong, konstantin.ananyev, Honnappa Nagarahalli
Ping again. Stephen?
12/05/2021 21:10, Thomas Monjalon:
> Ping for v3
>
> 12/02/2021 01:21, Honnappa Nagarahalli:
> > <snip>
> >
> > >
> > > 14/01/2021 17:55, Stephen Hemminger:
> > > > The implementation of reader/writer locks in DPDK (from first release)
> > > > is simple and fast. But it can lead to writer starvation issues.
> > > >
> > > > It is not easy to fix this without changing ABI and potentially
> > > > breaking customer applications that are expect the unfair behavior.
> > >
> > > typo: "are expect"
> > >
> > > > The wikipedia page on reader-writer problem has a similar example
> > > > which summarizes the problem pretty well.
> > >
> > > Maybe add the URL in the commit message?
> > >
> > > >
> > > > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > > > ---
> > > > --- a/lib/librte_eal/include/generic/rte_rwlock.h
> > > > +++ b/lib/librte_eal/include/generic/rte_rwlock.h
> > > > + * Note: This version of reader/writer locks is not fair because
> > ^^^^^^ may be "implementation" would be better?
> >
> > > > + * readers do not block for pending writers. A stream of readers can
> > > > + * subsequently lock out all potential writers and starve them.
> > > > + * This is because after the first reader locks the resource,
> > > > + * no writer can lock it. The writer will only be able to get the
> > > > + lock
> > > > + * when it will only be released by the last reader.
> > This looks good. Though the writer starvation is prominent, the reader starvation is possible if there is a stream of writers when a writer holds the lock. Should we call this out too?
> >
> > >
> > > You did not get review, probably because nobody was Cc'ed.
> > > +Cc Honnappa, Joyce and Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
2021-10-25 21:40 4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
2021-10-28 7:10 0% ` Jiang, YuX
2021-11-05 21:51 0% ` Thinh Tran
@ 2021-11-08 10:50 0% ` Pei Zhang
2 siblings, 0 replies; 200+ results
From: Pei Zhang @ 2021-11-08 10:50 UTC (permalink / raw)
To: Thomas Monjalon
Cc: David Marchand, Maxime Coquelin, Kevin Traynor, dev, Chao Yang
Hello Thomas,
The testing with dpdk 21.11-rc1 from Red Hat looks good. We tested the 18
scenarios below and all passed on RHEL8:
(1)Guest with device assignment(PF) throughput testing(1G hugepage size):
PASS
(2)Guest with device assignment(PF) throughput testing(2M hugepage size) :
PASS
(3)Guest with device assignment(VF) throughput testing: PASS
(4)PVP (host dpdk testpmd as vswitch) 1Q: throughput testing: PASS
(5)PVP vhost-user 2Q throughput testing: PASS
(6)PVP vhost-user 1Q - cross numa node throughput testing: PASS
(7)Guest with vhost-user 2 queues throughput testing: PASS
(8)vhost-user reconnect with dpdk-client, qemu-server: qemu reconnect: PASS
(9)vhost-user reconnect with dpdk-client, qemu-server: ovs reconnect: PASS
(10)PVP 1Q live migration testing: PASS
(11)PVP 1Q post copy live migration testing: PASS
(12)PVP 1Q cross numa node live migration testing: PASS
(13)Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
(14)Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
(15)Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
(16)Guest with ovs+dpdk+vhost-user 4Q live migration testing: PASS
(17)Host PF + DPDK testing: PASS
(18)Host VF + DPDK testing: PASS
Versions:
kernel 4.18
qemu 6.1
dpdk: git://dpdk.org/dpdk
# git log -1
commit 6c390cee976e33b1e9d8562d32c9d3ebe5d9ce94 (HEAD -> main, tag:
v21.11-rc1)
Author: Thomas Monjalon <thomas@monjalon.net>
Date: Mon Oct 25 22:42:47 2021 +0200
version: 21.11-rc1
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
NICs: X540-AT2 NIC(ixgbe, 10G)
Best regards,
Pei
On Tue, Oct 26, 2021 at 5:41 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> A new DPDK release candidate is ready for testing:
> https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
>
> There are 1171 new patches in this snapshot, big as expected.
>
> Release notes:
> https://doc.dpdk.org/guides/rel_notes/release_21_11.html
>
> Highlights of 21.11-rc1:
> * General
> - more than 512 MSI-X interrupts
> - hugetlbfs subdirectories
> - mempool flag for non-IO usages
> - device class for DMA accelerators
> - DMA drivers for Intel DSA and IOAT
> * Networking
> - MTU handling rework
> - get all MAC addresses of a port
> - RSS based on L3/L4 checksum fields
> - flow match on L2TPv2 and PPP
> - flow flex parser for custom header
> - control delivery of HW Rx metadata
> - transfer flows API rework
> - shared Rx queue
> - Windows support of Intel e1000, ixgbe and iavf
> - testpmd multi-process
> - pcapng library and dumpcap tool
> * API/ABI
> - API namespace improvements (mempool, mbuf, ethdev)
> - API internals hidden (intr, ethdev, security, cryptodev,
> eventdev, cmdline)
> - flags check for future ABI compatibility (memzone, mbuf, mempool)
>
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
>
> Thank you everyone
>
>
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace
@ 2021-11-08 13:55 3% ` Konstantin Ananyev
2021-11-09 12:32 3% ` [dpdk-dev] [PATCH v5] " Konstantin Ananyev
0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-11-08 13:55 UTC (permalink / raw)
To: dev; +Cc: Konstantin Ananyev
Update public macros to have RTE_IP_FRAG_ prefix.
Remove obsolete macro.
Update DPDK components to use new names.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 3 +++
examples/ip_reassembly/main.c | 2 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 +-
lib/ip_frag/rte_ip_frag.h | 17 +++++++----------
lib/ip_frag/rte_ip_frag_common.c | 3 ++-
lib/ip_frag/rte_ipv6_fragmentation.c | 12 ++++++------
lib/ip_frag/rte_ipv6_reassembly.c | 6 +++---
lib/port/rte_port_ras.c | 2 +-
8 files changed, 24 insertions(+), 23 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8da19c613a..ce47250fbd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -559,6 +559,9 @@ API Changes
* fib: Added the ``rib_ext_sz`` field to ``rte_fib_conf`` and ``rte_fib6_conf``
so that user can specify the size of the RIB extension inside the FIB.
+* ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix. Obsolete
+ macros are removed. DPDK components updated to use new names.
+
ABI Changes
-----------
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 547b47276e..fb3cac3bd0 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -371,7 +371,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
eth_hdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if packet is IPv6 */
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
struct rte_ipv6_hdr *ip_hdr;
ip_hdr = (struct rte_ipv6_hdr *)(eth_hdr + 1);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 0a1c5bcaaa..86bb7e9064 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2647,7 +2647,7 @@ rx_callback(__rte_unused uint16_t port, __rte_unused uint16_t queue,
rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
struct rte_ipv6_hdr *iph;
- struct ipv6_extension_fragment *fh;
+ struct rte_ipv6_fragment_ext *fh;
iph = (struct rte_ipv6_hdr *)(eth + 1);
fh = rte_ipv6_frag_get_ipv6_fragment_header(iph);
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index b469bb5f4e..0782ba45d6 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -27,22 +27,19 @@ extern "C" {
struct rte_mbuf;
-#define IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
+#define RTE_IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
/* death row size in mbufs */
-#define IP_FRAG_DEATH_ROW_MBUF_LEN \
- (IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
+#define RTE_IP_FRAG_DEATH_ROW_MBUF_LEN \
+ (RTE_IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
/** mbuf death row (packets to be freed) */
struct rte_ip_frag_death_row {
uint32_t cnt; /**< number of mbufs currently on death row */
- struct rte_mbuf *row[IP_FRAG_DEATH_ROW_MBUF_LEN];
+ struct rte_mbuf *row[RTE_IP_FRAG_DEATH_ROW_MBUF_LEN];
/**< mbufs to be freed */
};
-/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */
-#define ipv6_extension_fragment rte_ipv6_fragment_ext
-
/**
* Create a new IP fragmentation table.
*
@@ -128,7 +125,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr,
struct rte_mbuf *mb, uint64_t tms, struct rte_ipv6_hdr *ip_hdr,
- struct ipv6_extension_fragment *frag_hdr);
+ struct rte_ipv6_fragment_ext *frag_hdr);
/**
* Return a pointer to the packet's fragment header, if found.
@@ -141,11 +138,11 @@ struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
* Pointer to the IPv6 fragment extension header, or NULL if it's not
* present.
*/
-static inline struct ipv6_extension_fragment *
+static inline struct rte_ipv6_fragment_ext *
rte_ipv6_frag_get_ipv6_fragment_header(struct rte_ipv6_hdr *hdr)
{
if (hdr->proto == IPPROTO_FRAGMENT) {
- return (struct ipv6_extension_fragment *) ++hdr;
+ return (struct rte_ipv6_fragment_ext *) ++hdr;
}
else
return NULL;
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index 6b29e9d7ed..8580ffca5e 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -135,7 +135,8 @@ rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
TAILQ_FOREACH(fp, &tbl->lru, lru)
if (max_cycles + fp->start < tms) {
/* check that death row has enough space */
- if (IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >= fp->last_idx)
+ if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >=
+ fp->last_idx)
ip_frag_tbl_del(tbl, dr, fp);
else
return;
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index 5d67336f2d..88f29c158c 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -22,13 +22,13 @@ __fill_ipv6hdr_frag(struct rte_ipv6_hdr *dst,
const struct rte_ipv6_hdr *src, uint16_t len, uint16_t fofs,
uint32_t mf)
{
- struct ipv6_extension_fragment *fh;
+ struct rte_ipv6_fragment_ext *fh;
rte_memcpy(dst, src, sizeof(*dst));
dst->payload_len = rte_cpu_to_be_16(len);
dst->proto = IPPROTO_FRAGMENT;
- fh = (struct ipv6_extension_fragment *) ++dst;
+ fh = (struct rte_ipv6_fragment_ext *) ++dst;
fh->next_header = src->proto;
fh->reserved = 0;
fh->frag_data = rte_cpu_to_be_16(RTE_IPV6_SET_FRAG_DATA(fofs, mf));
@@ -94,7 +94,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
*/
frag_size = mtu_size - sizeof(struct rte_ipv6_hdr) -
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
frag_size = RTE_ALIGN_FLOOR(frag_size, RTE_IPV6_EHDR_FO_ALIGN);
/* Check that pkts_out is big enough to hold all fragments */
@@ -124,9 +124,9 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
/* Reserve space for the IP header that will be built later */
out_pkt->data_len = sizeof(struct rte_ipv6_hdr) +
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
out_pkt->pkt_len = sizeof(struct rte_ipv6_hdr) +
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
frag_bytes_remaining = frag_size;
out_seg_prev = out_pkt;
@@ -184,7 +184,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
fragment_offset = (uint16_t)(fragment_offset +
out_pkt->pkt_len - sizeof(struct rte_ipv6_hdr)
- - sizeof(struct ipv6_extension_fragment));
+ - sizeof(struct rte_ipv6_fragment_ext));
/* Write the fragment to the output list */
pkts_out[out_pkt_pos] = out_pkt;
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 6bc0bf792a..d4019e87e6 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -33,7 +33,7 @@ struct rte_mbuf *
ipv6_frag_reassemble(struct ip_frag_pkt *fp)
{
struct rte_ipv6_hdr *ip_hdr;
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
struct rte_mbuf *m, *prev;
uint32_t i, n, ofs, first_len;
uint32_t last_len, move_len, payload_len;
@@ -102,7 +102,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
* the main IPv6 header instead.
*/
move_len = m->l2_len + m->l3_len - sizeof(*frag_hdr);
- frag_hdr = (struct ipv6_extension_fragment *) (ip_hdr + 1);
+ frag_hdr = (struct rte_ipv6_fragment_ext *) (ip_hdr + 1);
ip_hdr->proto = frag_hdr->next_header;
ip_frag_memmove(rte_pktmbuf_mtod_offset(m, char *, sizeof(*frag_hdr)),
@@ -136,7 +136,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
struct rte_mbuf *
rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
- struct rte_ipv6_hdr *ip_hdr, struct ipv6_extension_fragment *frag_hdr)
+ struct rte_ipv6_hdr *ip_hdr, struct rte_ipv6_fragment_ext *frag_hdr)
{
struct ip_frag_pkt *fp;
struct ip_frag_key key;
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 403028f8d6..8508814bb2 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -186,7 +186,7 @@ process_ipv6(struct rte_port_ring_writer_ras *p, struct rte_mbuf *pkt)
struct rte_ipv6_hdr *pkt_hdr =
rte_pktmbuf_mtod(pkt, struct rte_ipv6_hdr *);
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
uint16_t frag_data = 0;
frag_hdr = rte_ipv6_frag_get_ipv6_fragment_header(pkt_hdr);
if (frag_hdr != NULL)
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter
2021-11-02 19:03 14% [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter Konstantin Ananyev
@ 2021-11-08 22:08 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-08 22:08 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev, techboard, bruce.richardson, koncept1
02/11/2021 20:03, Konstantin Ananyev:
> Increase the default value of the config parameter RTE_LIBRTE_IP_FRAG_MAX_FRAG
> from 4 to 8. This parameter controls the maximum number of fragments per
> packet in the IP reassembly table. Increasing this value from 4 to 8 will
> allow users to cover the common case of a jumbo packet size of 9KB with
> fragments of the default frame size (1500B).
> As RTE_LIBRTE_IP_FRAG_MAX_FRAG is used in definition of public
> structure (struct rte_ip_frag_death_row), this is an ABI change.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> -#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
> +#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8
This unannounced change was approved by the techboard:
http://inbox.dpdk.org/dev/0fccb0b7-b2bb-7391-9c94-e87fbf64f007@redhat.com/
Applied with simplified release notes, thanks.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management
@ 2021-11-09 2:07 3% ` Narcisa Ana Maria Vasile
2021-11-10 3:13 0% ` Narcisa Ana Maria Vasile
0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-09 2:07 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, dmitry.kozliuk, khot, dmitrym, roretzla, talshn, ocardona,
bruce.richardson, david.marchand, pallavi.kadam
On Tue, Oct 12, 2021 at 06:32:09PM +0200, Thomas Monjalon wrote:
> 09/10/2021 09:41, Narcisa Ana Maria Vasile:
> > From: Narcisa Vasile <navasile@microsoft.com>
> >
> > Add functions for barrier init, destroy, wait.
> >
> > A portable type is used to represent a barrier identifier.
> > The rte_thread_barrier_wait() function returns the same value
> > on all platforms.
> >
> > Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
> > ---
> > lib/eal/common/rte_thread.c | 61 ++++++++++++++++++++++++++++++++++++
> > lib/eal/include/rte_thread.h | 58 ++++++++++++++++++++++++++++++++++
> > lib/eal/version.map | 3 ++
> > lib/eal/windows/rte_thread.c | 56 +++++++++++++++++++++++++++++++++
> > 4 files changed, 178 insertions(+)
>
> It doesn't need to be part of the API.
> The pthread barrier is used only as part of the control thread implementation.
> The need disappears if you implement the control thread on Windows.
>
Actually I think I have the implementation already. I worked on this some time ago,
and I have this patch:
[v4,2/6] eal: add function for control thread creation
The issue is that it will break ABI, so I cannot merge it as part of this patchset.
I'll see if I can remove this barrier patch though.
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5] ip_frag: add namespace
2021-11-08 13:55 3% ` [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace Konstantin Ananyev
@ 2021-11-09 12:32 3% ` Konstantin Ananyev
0 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-11-09 12:32 UTC (permalink / raw)
To: dev; +Cc: Konstantin Ananyev
Update public macros to have RTE_IP_FRAG_ prefix.
Update DPDK components to use new names.
Keep obsolete macro for compatibility reasons.
Rename experimental function ``rte_frag_table_del_expired_entries`` to
``rte_ip_frag_table_del_expired_entries`` to comply with other public
API naming conventions.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 ++++++
examples/ip_reassembly/main.c | 2 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 +-
lib/ip_frag/rte_ip_frag.h | 29 ++++++++++++++++----------
lib/ip_frag/rte_ip_frag_common.c | 5 +++--
lib/ip_frag/rte_ipv6_fragmentation.c | 12 +++++------
lib/ip_frag/rte_ipv6_reassembly.c | 6 +++---
lib/ip_frag/version.map | 2 +-
lib/port/rte_port_ras.c | 2 +-
9 files changed, 40 insertions(+), 26 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..226dbb5bf0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -565,6 +565,12 @@ API Changes
* fib: Added the ``rib_ext_sz`` field to ``rte_fib_conf`` and ``rte_fib6_conf``
so that user can specify the size of the RIB extension inside the FIB.
+* ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix. Obsolete
+ macros are kept for compatibility. DPDK components updated to use new names.
+ Experimental function ``rte_frag_table_del_expired_entries`` was renamed to
+ ``rte_ip_frag_table_del_expired_entries`` to comply with other public
+ API naming convention.
+
ABI Changes
-----------
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 547b47276e..fb3cac3bd0 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -371,7 +371,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
eth_hdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if packet is IPv6 */
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
struct rte_ipv6_hdr *ip_hdr;
ip_hdr = (struct rte_ipv6_hdr *)(eth_hdr + 1);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 0a1c5bcaaa..86bb7e9064 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2647,7 +2647,7 @@ rx_callback(__rte_unused uint16_t port, __rte_unused uint16_t queue,
rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
struct rte_ipv6_hdr *iph;
- struct ipv6_extension_fragment *fh;
+ struct rte_ipv6_fragment_ext *fh;
iph = (struct rte_ipv6_hdr *)(eth + 1);
fh = rte_ipv6_frag_get_ipv6_fragment_header(iph);
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index b469bb5f4e..9493021428 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -27,22 +27,19 @@ extern "C" {
struct rte_mbuf;
-#define IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
+#define RTE_IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
/* death row size in mbufs */
-#define IP_FRAG_DEATH_ROW_MBUF_LEN \
- (IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
+#define RTE_IP_FRAG_DEATH_ROW_MBUF_LEN \
+ (RTE_IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
/** mbuf death row (packets to be freed) */
struct rte_ip_frag_death_row {
uint32_t cnt; /**< number of mbufs currently on death row */
- struct rte_mbuf *row[IP_FRAG_DEATH_ROW_MBUF_LEN];
+ struct rte_mbuf *row[RTE_IP_FRAG_DEATH_ROW_MBUF_LEN];
/**< mbufs to be freed */
};
-/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */
-#define ipv6_extension_fragment rte_ipv6_fragment_ext
-
/**
* Create a new IP fragmentation table.
*
@@ -128,7 +125,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr,
struct rte_mbuf *mb, uint64_t tms, struct rte_ipv6_hdr *ip_hdr,
- struct ipv6_extension_fragment *frag_hdr);
+ struct rte_ipv6_fragment_ext *frag_hdr);
/**
* Return a pointer to the packet's fragment header, if found.
@@ -141,11 +138,11 @@ struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
* Pointer to the IPv6 fragment extension header, or NULL if it's not
* present.
*/
-static inline struct ipv6_extension_fragment *
+static inline struct rte_ipv6_fragment_ext *
rte_ipv6_frag_get_ipv6_fragment_header(struct rte_ipv6_hdr *hdr)
{
if (hdr->proto == IPPROTO_FRAGMENT) {
- return (struct ipv6_extension_fragment *) ++hdr;
+ return (struct rte_ipv6_fragment_ext *) ++hdr;
}
else
return NULL;
@@ -258,9 +255,19 @@ rte_ip_frag_table_statistics_dump(FILE * f, const struct rte_ip_frag_tbl *tbl);
*/
__rte_experimental
void
-rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
+rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr, uint64_t tms);
+/**@{*/
+/**
+ * Obsolete macros, kept here for compatibility reasons.
+ * Will be deprecated/removed in future DPDK releases.
+ */
+#define IP_FRAG_DEATH_ROW_LEN RTE_IP_FRAG_DEATH_ROW_LEN
+#define IP_FRAG_DEATH_ROW_MBUF_LEN RTE_IP_FRAG_DEATH_ROW_MBUF_LEN
+#define ipv6_extension_fragment rte_ipv6_fragment_ext
+/**@}*/
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index 6b29e9d7ed..2c781a6d33 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -124,7 +124,7 @@ rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
/* Delete expired fragments */
void
-rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
+rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr, uint64_t tms)
{
uint64_t max_cycles;
@@ -135,7 +135,8 @@ rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
TAILQ_FOREACH(fp, &tbl->lru, lru)
if (max_cycles + fp->start < tms) {
/* check that death row has enough space */
- if (IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >= fp->last_idx)
+ if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >=
+ fp->last_idx)
ip_frag_tbl_del(tbl, dr, fp);
else
return;
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index 5d67336f2d..88f29c158c 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -22,13 +22,13 @@ __fill_ipv6hdr_frag(struct rte_ipv6_hdr *dst,
const struct rte_ipv6_hdr *src, uint16_t len, uint16_t fofs,
uint32_t mf)
{
- struct ipv6_extension_fragment *fh;
+ struct rte_ipv6_fragment_ext *fh;
rte_memcpy(dst, src, sizeof(*dst));
dst->payload_len = rte_cpu_to_be_16(len);
dst->proto = IPPROTO_FRAGMENT;
- fh = (struct ipv6_extension_fragment *) ++dst;
+ fh = (struct rte_ipv6_fragment_ext *) ++dst;
fh->next_header = src->proto;
fh->reserved = 0;
fh->frag_data = rte_cpu_to_be_16(RTE_IPV6_SET_FRAG_DATA(fofs, mf));
@@ -94,7 +94,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
*/
frag_size = mtu_size - sizeof(struct rte_ipv6_hdr) -
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
frag_size = RTE_ALIGN_FLOOR(frag_size, RTE_IPV6_EHDR_FO_ALIGN);
/* Check that pkts_out is big enough to hold all fragments */
@@ -124,9 +124,9 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
/* Reserve space for the IP header that will be built later */
out_pkt->data_len = sizeof(struct rte_ipv6_hdr) +
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
out_pkt->pkt_len = sizeof(struct rte_ipv6_hdr) +
- sizeof(struct ipv6_extension_fragment);
+ sizeof(struct rte_ipv6_fragment_ext);
frag_bytes_remaining = frag_size;
out_seg_prev = out_pkt;
@@ -184,7 +184,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
fragment_offset = (uint16_t)(fragment_offset +
out_pkt->pkt_len - sizeof(struct rte_ipv6_hdr)
- - sizeof(struct ipv6_extension_fragment));
+ - sizeof(struct rte_ipv6_fragment_ext));
/* Write the fragment to the output list */
pkts_out[out_pkt_pos] = out_pkt;
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 6bc0bf792a..d4019e87e6 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -33,7 +33,7 @@ struct rte_mbuf *
ipv6_frag_reassemble(struct ip_frag_pkt *fp)
{
struct rte_ipv6_hdr *ip_hdr;
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
struct rte_mbuf *m, *prev;
uint32_t i, n, ofs, first_len;
uint32_t last_len, move_len, payload_len;
@@ -102,7 +102,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
* the main IPv6 header instead.
*/
move_len = m->l2_len + m->l3_len - sizeof(*frag_hdr);
- frag_hdr = (struct ipv6_extension_fragment *) (ip_hdr + 1);
+ frag_hdr = (struct rte_ipv6_fragment_ext *) (ip_hdr + 1);
ip_hdr->proto = frag_hdr->next_header;
ip_frag_memmove(rte_pktmbuf_mtod_offset(m, char *, sizeof(*frag_hdr)),
@@ -136,7 +136,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
struct rte_mbuf *
rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
- struct rte_ipv6_hdr *ip_hdr, struct ipv6_extension_fragment *frag_hdr)
+ struct rte_ipv6_hdr *ip_hdr, struct rte_ipv6_fragment_ext *frag_hdr)
{
struct ip_frag_pkt *fp;
struct ip_frag_key key;
diff --git a/lib/ip_frag/version.map b/lib/ip_frag/version.map
index 33f231fb31..e537224293 100644
--- a/lib/ip_frag/version.map
+++ b/lib/ip_frag/version.map
@@ -16,5 +16,5 @@ DPDK_22 {
EXPERIMENTAL {
global:
- rte_frag_table_del_expired_entries;
+ rte_ip_frag_table_del_expired_entries;
};
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 403028f8d6..8508814bb2 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -186,7 +186,7 @@ process_ipv6(struct rte_port_ring_writer_ras *p, struct rte_mbuf *pkt)
struct rte_ipv6_hdr *pkt_hdr =
rte_pktmbuf_mtod(pkt, struct rte_ipv6_hdr *);
- struct ipv6_extension_fragment *frag_hdr;
+ struct rte_ipv6_fragment_ext *frag_hdr;
uint16_t frag_data = 0;
frag_hdr = rte_ipv6_frag_get_ipv6_fragment_header(pkt_hdr);
if (frag_hdr != NULL)
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v17 00/13] eal: Add EAL API for threading
@ 2021-11-10 3:01 3% ` Narcisa Ana Maria Vasile
2021-11-11 1:33 3% ` [PATCH v18 0/8] " Narcisa Ana Maria Vasile
1 sibling, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-10 3:01 UTC (permalink / raw)
To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
talshn, ocardona
Cc: bruce.richardson, david.marchand, pallavi.kadam
From: Narcisa Vasile <navasile@microsoft.com>
EAL thread API
**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread-matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.
**Goals**
* Introduce a generic EAL API for threading support that will remove
the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
3rd party thread library through a configuration option.
**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)
**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();
lib/librte_eal/common/rte_thread.c
int rte_thread_create()
{
return pthread_create();
}
lib/librte_eal/windows/rte_thread.c
int rte_thread_create()
{
return CreateThread();
}
-----------------------------------------------------
**Thread attributes**
When or after a thread is created, specific characteristics of the thread
can be adjusted. Currently in DPDK most threads operate at the OS-default
priority level but there are cases when increasing the priority is useful.
For example, high-performance applications require elevated priority to
avoid being preempted by other threads on the system.
The following structure that represents thread attributes has been
defined:
typedef struct
{
enum rte_thread_priority priority;
rte_cpuset_t cpuset;
} rte_thread_attr_t;
The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
*Priority* is represented through an enum that currently advertises
two values for priority:
- RTE_THREAD_PRIORITY_NORMAL
- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority - sets the priority of a thread
rte_thread_get_priority - retrieves the priority of a thread
from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
with a new value for priority
*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
rte_thread_attr_t object
rte_thread_set/get_affinity – sets/gets the affinity of a thread
**Errors**
As different platforms have different error codes, the approach here
is to translate the Windows error to POSIX-style ones to have
uniformity over the values returned.
**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
(such as pthread_setname_np, etc.)
v17:
- Move unrelated changes to the correct patch.
- Rename RTE_STATIC_MUTEX to avoid confusion, since
the mutex is still dynamically initialized behind the scenes.
- Break down the unit tests into smaller patches and reorder them.
- Remove duplicated code in header.
- Improve commit messages and cover letter.
v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
are not available on the system.
- Fix priority unit test to avoid termination of thread before the
priority is checked.
v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
different thread, the function returns immediately. Otherwise,
the mutex will be acquired.
- Add function for getting the priority of a thread.
An auxiliary function that translates the OS priority to the
EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
Verify mutex locking, verify barrier return values. Add test for
statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
using pthread_set_affinity() after the thread is created.
v14:
- Remove patch "eal: add EAL argument for setting thread priority"
This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
as the default.
- Fix issue with thread return value.
v13:
- Fix syntax error in unit tests
v12:
- Fix freebsd warning about initializer in unit tests
v11:
- Add unit tests for thread API
- Rebase
v10:
- Remove patch no. 10. It will be broken down into subpatches
and sent as a different patchset that depends on this one.
This is done due to the ABI breaks that would be caused by patch 10.
- Replace unix/rte_thread.c with common/rte_thread.c
- Remove initializations that may prevent compiler from issuing useful
warnings.
- Remove rte_thread_types.h and rte_windows_thread_types.h
- Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
- Remove functions that retrieves thread handle from process handle
- Remove rte_thread_cancel() until same behavior is obtained on
all platforms.
- Fix rte_thread_detach() function description,
return value and remove empty line.
- Reimplement mutex functions. Add compatible representation for mutex
identifier. Add macro to replace static mutex initialization instances.
- Fix commit messages (lines too long, remove unicode symbols)
v9:
- Sign patches
v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value
v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.
v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()
v5:
- update cover letter with more details on the priority argument
v4:
- fix function description
- rebase
v3:
- rebase
v2:
- revert changes that break ABI
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c
Narcisa Vasile (13):
eal: add basic threading functions
eal: add thread attributes
eal/windows: translate Windows errors to errno-style errors
eal: implement functions for thread affinity management
eal: implement thread priority management functions
eal: add thread lifetime management
app/test: add unit tests for rte_thread_self
app/test: add unit tests for thread attributes
app/test: add unit tests for thread lifetime management
eal: implement functions for thread barrier management
app/test: add unit tests for barrier
eal: implement functions for mutex management
app/test: add unit tests for mutex
app/test/meson.build | 2 +
app/test/test_threads.c | 372 ++++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_thread.c | 497 ++++++++++++++++++++++++
lib/eal/include/rte_thread.h | 412 +++++++++++++++++++-
lib/eal/unix/meson.build | 1 -
lib/eal/unix/rte_thread.c | 92 -----
lib/eal/version.map | 22 ++
lib/eal/windows/eal_lcore.c | 176 ++++++---
lib/eal/windows/eal_windows.h | 10 +
lib/eal/windows/include/sched.h | 2 +-
lib/eal/windows/rte_thread.c | 656 ++++++++++++++++++++++++++++++--
12 files changed, 2070 insertions(+), 173 deletions(-)
create mode 100644 app/test/test_threads.c
create mode 100644 lib/eal/common/rte_thread.c
delete mode 100644 lib/eal/unix/rte_thread.c
--
2.31.0.vfs.0.1
* Re: [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management
2021-11-09 2:07 3% ` Narcisa Ana Maria Vasile
@ 2021-11-10 3:13 0% ` Narcisa Ana Maria Vasile
0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-10 3:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, dmitry.kozliuk, khot, dmitrym, roretzla, talshn, ocardona,
bruce.richardson, david.marchand, pallavi.kadam
On Mon, Nov 08, 2021 at 06:07:34PM -0800, Narcisa Ana Maria Vasile wrote:
> On Tue, Oct 12, 2021 at 06:32:09PM +0200, Thomas Monjalon wrote:
> > 09/10/2021 09:41, Narcisa Ana Maria Vasile:
> > > From: Narcisa Vasile <navasile@microsoft.com>
> > >
> > > Add functions for barrier init, destroy, wait.
> > >
> > > A portable type is used to represent a barrier identifier.
> > > The rte_thread_barrier_wait() function returns the same value
> > > on all platforms.
> > >
> > > Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
> > > ---
> > > lib/eal/common/rte_thread.c | 61 ++++++++++++++++++++++++++++++++++++
> > > lib/eal/include/rte_thread.h | 58 ++++++++++++++++++++++++++++++++++
> > > lib/eal/version.map | 3 ++
> > > lib/eal/windows/rte_thread.c | 56 +++++++++++++++++++++++++++++++++
> > > 4 files changed, 178 insertions(+)
> >
> > It doesn't need to be part of the API.
> > The pthread barrier is used only as part of the control thread implementation.
> > The need disappears if you implement control threads on Windows.
> >
> Actually, I think I have the implementation already. I worked on this some time ago;
> I have this patch:
> [v4,2/6] eal: add function for control thread creation
>
> The issue is that it will break the ABI, so I cannot merge it as part of this patchset.
> I'll see if I can remove this barrier patch though.
I couldn't find a good way to test mutexes without barriers, so I kept this for now.
* [PATCH 1/5] ci: test build with minimum configuration
@ 2021-11-10 16:48 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-10 16:48 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, thomas, bluca, tredaelli, i.maximets,
james.r.harris, mohammed, Aaron Conole, Michael Santana
Disabling optional libraries was not tested.
Add a new target in test-meson-builds.sh and GHA.
The Bluefield target is removed from test-meson-builds.sh to save space
and compilation time in exchange for the new target.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.ci/linux-build.sh | 3 +++
.github/workflows/build.yml | 5 +++++
devtools/test-meson-builds.sh | 4 +++-
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ef0bd099be..e7ed648099 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -87,6 +87,9 @@ OPTS="$OPTS -Dplatform=generic"
OPTS="$OPTS --default-library=$DEF_LIB"
OPTS="$OPTS --buildtype=debugoptimized"
OPTS="$OPTS -Dcheck_includes=true"
+if [ "$NO_OPTIONAL_LIBS" = "true" ]; then
+ OPTS="$OPTS -Ddisable_libs=*"
+fi
meson build --werror $OPTS
ninja -C build
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 4151cafee7..346cc75c20 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -21,6 +21,7 @@ jobs:
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
LIBABIGAIL_VERSION: libabigail-1.8
+ NO_OPTIONAL_LIBS: ${{ matrix.config.no_optional_libs != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
REF_GIT_TAG: none
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
@@ -32,6 +33,10 @@ jobs:
- os: ubuntu-18.04
compiler: gcc
library: static
+ - os: ubuntu-18.04
+ compiler: gcc
+ library: shared
+ no_optional_libs: no-optional-libs
- os: ubuntu-18.04
compiler: gcc
library: shared
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 9ec8e2bc7e..36ecf63ec6 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -220,6 +220,8 @@ for c in gcc clang ; do
done
done
+build build-x86-no-optional-libs cc skipABI $use_shared -Ddisable_libs=*
+
# test compilation with minimal x86 instruction set
# Set the install path for libraries to "lib" explicitly to prevent problems
# with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
@@ -258,7 +260,7 @@ export CC="clang"
build build-arm64-host-clang $f ABI $use_shared
unset CC
# some gcc/arm configurations
-for f in $srcdir/config/arm/arm64_[bdo]*gcc ; do
+for f in $srcdir/config/arm/arm64_[do]*gcc ; do
export CC="$CCACHE gcc"
targetdir=build-$(basename $f | tr '_' '-' | cut -d'-' -f-2)
build $targetdir $f skipABI $use_shared
--
2.23.0
* [PATCH v18 0/8] eal: Add EAL API for threading
2021-11-10 3:01 3% ` [dpdk-dev] [PATCH v17 00/13] eal: Add EAL API for threading Narcisa Ana Maria Vasile
@ 2021-11-11 1:33 3% ` Narcisa Ana Maria Vasile
0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-11 1:33 UTC (permalink / raw)
To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
talshn, ocardona
Cc: bruce.richardson, david.marchand, pallavi.kadam
From: Narcisa Vasile <navasile@microsoft.com>
EAL thread API
**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread-matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.
**Goals**
* Introduce a generic EAL API for threading support that will remove
the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
3rd party thread library through a configuration option.
**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)
**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();
lib/librte_eal/common/rte_thread.c
int rte_thread_create()
{
return pthread_create();
}
lib/librte_eal/windows/rte_thread.c
int rte_thread_create()
{
return CreateThread();
}
-----------------------------------------------------
**Thread attributes**
When or after a thread is created, specific characteristics of the thread
can be adjusted. Currently in DPDK most threads operate at the OS-default
priority level but there are cases when increasing the priority is useful.
For example, high-performance applications require elevated priority to
avoid being preempted by other threads on the system.
The following structure that represents thread attributes has been
defined:
typedef struct
{
enum rte_thread_priority priority;
rte_cpuset_t cpuset;
} rte_thread_attr_t;
The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
*Priority* is represented through an enum that currently advertises
two values for priority:
- RTE_THREAD_PRIORITY_NORMAL
- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority - sets the priority of a thread
rte_thread_get_priority - retrieves the priority of a thread
from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
with a new value for priority
*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
rte_thread_attr_t object
rte_thread_set/get_affinity – sets/gets the affinity of a thread
**Errors**
As different platforms have different error codes, the approach here
is to translate the Windows error to POSIX-style ones to have
uniformity over the values returned.
**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
(such as pthread_setname_np, etc.)
v18:
- Squash unit tests in corresponding patches.
- Prevent priority from being set to realtime on non-Windows systems.
- Use already existing affinity function in rte_thread_create()
v17:
- Move unrelated changes to the correct patch.
- Rename RTE_STATIC_MUTEX to avoid confusion, since
the mutex is still dynamically initialized behind the scenes.
- Break down the unit tests into smaller patches and reorder them.
- Remove duplicated code in header
- Improve commit messages and cover letter.
v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
are not available on the system.
- Fix priority unit test to avoid termination of thread before the
priority is checked.
v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
different thread, the function returns immediately. Otherwise,
the mutex will be acquired.
- Add function for getting the priority of a thread.
An auxiliary function that translates the OS priority to the
EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
Verify mutex locking, verify barrier return values. Add test for
statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
using pthread_set_affinity() after the thread is created.
v14:
- Remove patch "eal: add EAL argument for setting thread priority"
This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
as the default.
- Fix issue with thread return value.
v13:
- Fix syntax error in unit tests
v12:
- Fix freebsd warning about initializer in unit tests
v11:
- Add unit tests for thread API
- Rebase
v10:
- Remove patch no. 10. It will be broken down into subpatches
and sent as a different patchset that depends on this one.
This is done due to the ABI breaks that would be caused by patch 10.
- Replace unix/rte_thread.c with common/rte_thread.c
- Remove initializations that may prevent compiler from issuing useful
warnings.
- Remove rte_thread_types.h and rte_windows_thread_types.h
- Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
- Remove functions that retrieves thread handle from process handle
- Remove rte_thread_cancel() until same behavior is obtained on
all platforms.
- Fix rte_thread_detach() function description,
return value and remove empty line.
- Reimplement mutex functions. Add compatible representation for mutex
identifier. Add macro to replace static mutex initialization instances.
- Fix commit messages (lines too long, remove unicode symbols)
v9:
- Sign patches
v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value
v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.
v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()
v5:
- update cover letter with more details on the priority argument
v4:
- fix function description
- rebase
v3:
- rebase
v2:
- revert changes that break ABI
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c
Narcisa Vasile (8):
eal: add basic threading functions
eal: add thread attributes
eal/windows: translate Windows errors to errno-style errors
eal: implement functions for thread affinity management
eal: implement thread priority management functions
eal: add thread lifetime management
eal: implement functions for thread barrier management
eal: implement functions for mutex management
app/test/meson.build | 2 +
app/test/test_threads.c | 372 ++++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_thread.c | 511 +++++++++++++++++++++++++
lib/eal/include/rte_thread.h | 412 +++++++++++++++++++-
lib/eal/unix/meson.build | 1 -
lib/eal/unix/rte_thread.c | 92 -----
lib/eal/version.map | 22 ++
lib/eal/windows/eal_lcore.c | 176 ++++++---
lib/eal/windows/eal_windows.h | 10 +
lib/eal/windows/include/sched.h | 2 +-
lib/eal/windows/rte_thread.c | 656 ++++++++++++++++++++++++++++++--
12 files changed, 2084 insertions(+), 173 deletions(-)
create mode 100644 app/test/test_threads.c
create mode 100644 lib/eal/common/rte_thread.c
delete mode 100644 lib/eal/unix/rte_thread.c
--
2.31.0.vfs.0.1
* Re: [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use
@ 2021-11-11 11:54 3% ` Thomas Monjalon
2021-11-11 12:41 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-11 11:54 UTC (permalink / raw)
To: Morten Brørup, Tyler Retzlaff
Cc: stephen, dev, anatoly.burakov, ranjit.menon, mdr, david.marchand,
dmitry.kozliuk, bruce.richardson
11/11/2021 05:15, Tyler Retzlaff:
> On Tue, Oct 26, 2021 at 09:45:20AM +0200, Morten Brørup wrote:
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > Sent: Monday, 25 October 2021 21.14
> > >
> > > 15/03/2021 20:34, Tyler Retzlaff:
> > > > The proposal has resulted from request to review [1] the following
> > > > functions where there appeared to be inconsistency in return type
> > > > or parameter type selections for the following inline functions.
> > > >
> > > > rte_bsf32()
> > > > rte_bsf32_safe()
> > > > rte_bsf64()
> > > > rte_bsf64_safe()
> > > > rte_fls_u32()
> > > > rte_fls_u64()
> > > > rte_log2_u32()
> > > > rte_log2_u64()
> > > >
> > > > [1] http://mails.dpdk.org/archives/dev/2021-March/201590.html
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > > ---
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > +* eal: Fix inline function return and parameter types for
> > > rte_{bsf,fls}
> > > > + inline functions to be consistent.
> > > > + Change ``rte_bsf32_safe`` parameter ``v`` from ``uint64_t`` to
> > > ``uint32_t``.
> > > > + Change ``rte_bsf64`` return type to ``uint32_t`` instead of
> > > ``int``.
> > > > + Change ``rte_fls_u32`` return type to ``uint32_t`` instead of
> > > ``int``.
> > > > + Change ``rte_fls_u64`` return type to ``uint32_t`` instead of
> > > ``int``.
> > >
> > > It seems we completely forgot this.
> > > How critical is it?
> >
>
> our organization as a matter of internal security policy requires these
> sorts of things to be fixed. while i didn't see any bugs in the dpdk
> code there is an opportunity for users of these functions to
> accidentally write code that is prone to integer and buffer overflow
> class bugs.
>
> there is no urgency, but why leave things sloppy? though i do wish this
> had been responded to in a more timely manner 7 months for something
> that should have almost been rubber stamped.
It's difficult to stay on top of all topics.
The best way to avoid such miss is to ping when you see no progress.
So what's next?
They are only inline functions, right? So no ABI breakage.
Is it going to require any change on application-side? I guess no.
Is it acceptable in 21.11-rc3? maybe too late?
Is it acceptable in 22.02?
* RE: [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use
2021-11-11 11:54 3% ` Thomas Monjalon
@ 2021-11-11 12:41 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2021-11-11 12:41 UTC (permalink / raw)
To: Thomas Monjalon, Tyler Retzlaff
Cc: stephen, dev, anatoly.burakov, ranjit.menon, mdr, david.marchand,
dmitry.kozliuk, bruce.richardson
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, 11 November 2021 12.55
>
> 11/11/2021 05:15, Tyler Retzlaff:
> > On Tue, Oct 26, 2021 at 09:45:20AM +0200, Morten Brørup wrote:
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas
> Monjalon
> > > > Sent: Monday, 25 October 2021 21.14
> > > >
> > > > 15/03/2021 20:34, Tyler Retzlaff:
> > > > > The proposal has resulted from request to review [1] the
> following
> > > > > functions where there appeared to be inconsistency in return
> type
> > > > > or parameter type selections for the following inline
> functions.
> > > > >
> > > > > rte_bsf32()
> > > > > rte_bsf32_safe()
> > > > > rte_bsf64()
> > > > > rte_bsf64_safe()
> > > > > rte_fls_u32()
> > > > > rte_fls_u64()
> > > > > rte_log2_u32()
> > > > > rte_log2_u64()
> > > > >
> > > > > [1] http://mails.dpdk.org/archives/dev/2021-March/201590.html
> > > > >
> > > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > > > ---
> > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > > +* eal: Fix inline function return and parameter types for
> > > > rte_{bsf,fls}
> > > > > + inline functions to be consistent.
> > > > > + Change ``rte_bsf32_safe`` parameter ``v`` from ``uint64_t``
> to
> > > > ``uint32_t``.
> > > > > + Change ``rte_bsf64`` return type to ``uint32_t`` instead of
> > > > ``int``.
> > > > > + Change ``rte_fls_u32`` return type to ``uint32_t`` instead
> of
> > > > ``int``.
> > > > > + Change ``rte_fls_u64`` return type to ``uint32_t`` instead
> of
> > > > ``int``.
> > > >
> > > > It seems we completely forgot this.
> > > > How critical is it?
> > >
> >
> > our organization as a matter of internal security policy requires these
> > sorts of things to be fixed. while i didn't see any bugs in the dpdk
> > code there is an opportunity for users of these functions to
> > accidentally write code that is prone to integer and buffer overflow
> > class bugs.
> >
> > there is no urgency, but why leave things sloppy? though i do wish this
> > had been responded to in a more timely manner: 7 months for something
> > that should have almost been rubber stamped.
>
> It's difficult to be on all topics.
> The best way to avoid such a miss is to ping when you see no progress.
>
> So what's next?
> They are only inline functions, right? so no ABI breakage.
> Is it going to require any change on application-side? I guess no.
> Is it acceptable in 21.11-rc3? maybe too late?
> Is it acceptable in 22.02?
If Microsoft (represented by Tyler in this case) considers this a bug, I would prefer getting it into 21.11 - especially because it is an LTS release.
-Morten
^ permalink raw reply [relevance 0%]
* [PATCH v4 08/18] eal: fix typos in comments
@ 2021-11-12 0:02 4% ` Stephen Hemminger
2021-11-12 15:22 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-12 0:02 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Ray Kinsella, Dmitry Kozlyuk,
Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
Minor spelling errors.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/eal/include/rte_function_versioning.h | 2 +-
lib/eal/windows/include/fnmatch.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
index 746a1e19923e..eb6dd2bc1727 100644
--- a/lib/eal/include/rte_function_versioning.h
+++ b/lib/eal/include/rte_function_versioning.h
@@ -15,7 +15,7 @@
/*
* Provides backwards compatibility when updating exported functions.
- * When a symol is exported from a library to provide an API, it also provides a
+ * When a symbol is exported from a library to provide an API, it also provides a
* calling convention (ABI) that is embodied in its name, return type,
* arguments, etc. On occasion that function may need to change to accommodate
* new functionality, behavior, etc. When that occurs, it is desirable to
diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
index 142753c3568d..c272f65ccdc3 100644
--- a/lib/eal/windows/include/fnmatch.h
+++ b/lib/eal/windows/include/fnmatch.h
@@ -30,7 +30,7 @@ extern "C" {
* with the given regular expression pattern.
*
* @param pattern
- * regular expression notation decribing the pattern to match
+ * regular expression notation describing the pattern to match
*
* @param string
* source string to searcg for the pattern
--
2.30.2
^ permalink raw reply [relevance 4%]
* Re: [PATCH v4 08/18] eal: fix typos in comments
2021-11-12 0:02 4% ` [PATCH v4 08/18] eal: fix typos in comments Stephen Hemminger
@ 2021-11-12 15:22 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-11-12 15:22 UTC (permalink / raw)
To: Stephen Hemminger, dev
Cc: Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
On 12/11/2021 00:02, Stephen Hemminger wrote:
> Minor spelling errors.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> lib/eal/include/rte_function_versioning.h | 2 +-
> lib/eal/windows/include/fnmatch.h | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
> index 746a1e19923e..eb6dd2bc1727 100644
> --- a/lib/eal/include/rte_function_versioning.h
> +++ b/lib/eal/include/rte_function_versioning.h
> @@ -15,7 +15,7 @@
>
> /*
> * Provides backwards compatibility when updating exported functions.
> - * When a symol is exported from a library to provide an API, it also provides a
> + * When a symbol is exported from a library to provide an API, it also provides a
> * calling convention (ABI) that is embodied in its name, return type,
> * arguments, etc. On occasion that function may need to change to accommodate
> * new functionality, behavior, etc. When that occurs, it is desirable to
> diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
> index 142753c3568d..c272f65ccdc3 100644
> --- a/lib/eal/windows/include/fnmatch.h
> +++ b/lib/eal/windows/include/fnmatch.h
> @@ -30,7 +30,7 @@ extern "C" {
> * with the given regular expression pattern.
> *
> * @param pattern
> - * regular expression notation decribing the pattern to match
> + * regular expression notation describing the pattern to match
> *
> * @param string
> * source string to searcg for the pattern
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* [PATCH v4 0/5] cleanup more stuff on shutdown
@ 2021-11-13 0:28 3% ` Stephen Hemminger
2021-11-13 3:32 3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
2021-11-13 17:22 3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13 0:28 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Started using valgrind with DPDK, and there are lots of leftover
memory allocations and file descriptors. This makes it hard to
distinguish application leaks from DPDK leaks.
The DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty: many internal parts of
DPDK leave files and allocated memory behind.
This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones;
the harder and more critical ones are in the drivers
and the memory subsystem.
There should be no new exposed API or ABI changes here.
v4
- rebase to 20.11-rc
- drop one patch (alarm cleanup is implemented)
- drop patch that ends worker threads on cleanup.
the test is calling rte_exit/eal_cleanup in a forked process.
(could argue this is a test bug)!
v3
- fix a couple of minor checkpatch complaints
v2
- rebase after 20.05 file renames
- incorporate review comment feedback
- hold off some of the more involved patches for later
Stephen Hemminger (5):
eal: close log in eal_cleanup
eal: mp: end the multiprocess thread during cleanup
eal: vfio: cleanup the mp sync handle
eal: hotplug: cleanup multiprocess resources
eal: malloc: cleanup mp resources
lib/eal/common/eal_common_log.c | 13 +++++++++++++
lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
lib/eal/common/eal_private.h | 7 +++++++
lib/eal/common/hotplug_mp.c | 5 +++++
lib/eal/common/hotplug_mp.h | 6 ++++++
lib/eal/common/malloc_heap.c | 6 ++++++
lib/eal/common/malloc_heap.h | 3 +++
lib/eal/common/malloc_mp.c | 12 ++++++++++++
lib/eal/common/malloc_mp.h | 3 +++
lib/eal/linux/eal.c | 7 +++++++
lib/eal/linux/eal_log.c | 8 ++++++++
lib/eal/linux/eal_vfio.h | 1 +
lib/eal/linux/eal_vfio_mp_sync.c | 8 ++++++++
13 files changed, 96 insertions(+), 3 deletions(-)
--
2.30.2
^ permalink raw reply [relevance 3%]
* [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup
2021-11-13 0:28 3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
@ 2021-11-13 3:32 3% ` Stephen Hemminger
2021-11-13 17:22 3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13 3:32 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
When testing DPDK with ASAN or valgrind, there are lots of leftover
memory allocations and file descriptors. This makes it hard to
distinguish application leaks from internal DPDK leaks.
The DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty: many internal parts of
DPDK leave files and allocated memory behind.
This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones;
the harder and more critical ones are in the drivers
and the memory subsystem.
There should be no new exposed API or ABI changes here.
v5
- add stub for windows build in rte_malloc cleanup
v4
- rebase to 20.11-rc
- drop one patch (alarm cleanup is implemented)
- drop patch that ends worker threads on cleanup.
the test is calling rte_exit/eal_cleanup in a forked process.
(could argue this is a test bug)!
v3
- fix a couple of minor checkpatch complaints
v2
- rebase after 20.05 file renames
- incorporate review comment feedback
- hold off some of the more involved patches for later
Stephen Hemminger (5):
eal: close log in eal_cleanup
eal: mp: end the multiprocess thread during cleanup
eal: vfio: cleanup the mp sync handle
eal: hotplug: cleanup multiprocess resources
eal: malloc: cleanup mp resources
lib/eal/common/eal_common_log.c | 13 +++++++++++++
lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
lib/eal/common/eal_private.h | 7 +++++++
lib/eal/common/hotplug_mp.c | 5 +++++
lib/eal/common/hotplug_mp.h | 6 ++++++
lib/eal/common/malloc_heap.c | 6 ++++++
lib/eal/common/malloc_heap.h | 3 +++
lib/eal/common/malloc_mp.c | 12 ++++++++++++
lib/eal/common/malloc_mp.h | 3 +++
lib/eal/linux/eal.c | 7 +++++++
lib/eal/linux/eal_log.c | 8 ++++++++
lib/eal/linux/eal_vfio.h | 1 +
lib/eal/linux/eal_vfio_mp_sync.c | 8 ++++++++
lib/eal/windows/eal_mp.c | 7 +++++++
14 files changed, 103 insertions(+), 3 deletions(-)
--
2.30.2
^ permalink raw reply [relevance 3%]
* [PATCH v6 0/5] cleanup more resources on eal_cleanup
2021-11-13 0:28 3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
2021-11-13 3:32 3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
@ 2021-11-13 17:22 3% ` Stephen Hemminger
2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13 17:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
When testing DPDK with ASAN or valgrind, there are lots of leftover
memory allocations and file descriptors. This makes it hard to
distinguish application leaks from internal DPDK leaks.
The DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty: many internal parts of
DPDK leave files and allocated memory behind.
This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones;
the harder and more critical ones are in the drivers
and the memory subsystem.
There should be no new exposed API or ABI changes here.
v6 - fix windows stub
v5 - add stub for windows build in rte_malloc cleanup
v4
- rebase to 20.11-rc
- drop one patch (alarm cleanup is implemented)
- drop patch that ends worker threads on cleanup.
the test is calling rte_exit/eal_cleanup in a forked process.
(could argue this is a test bug)!
Stephen Hemminger (5):
eal: close log in eal_cleanup
eal: mp: end the multiprocess thread during cleanup
eal: vfio: cleanup the mp sync handle
eal: hotplug: cleanup multiprocess resources
eal: malloc: cleanup mp resources
lib/eal/common/eal_common_log.c | 13 +++++++++++++
lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
lib/eal/common/eal_private.h | 7 +++++++
lib/eal/common/hotplug_mp.c | 5 +++++
lib/eal/common/hotplug_mp.h | 6 ++++++
lib/eal/common/malloc_heap.c | 6 ++++++
lib/eal/common/malloc_heap.h | 3 +++
lib/eal/common/malloc_mp.c | 12 ++++++++++++
lib/eal/common/malloc_mp.h | 3 +++
lib/eal/linux/eal.c | 7 +++++++
lib/eal/linux/eal_log.c | 8 ++++++++
lib/eal/linux/eal_vfio.h | 1 +
lib/eal/linux/eal_vfio_mp_sync.c | 8 ++++++++
lib/eal/windows/eal_mp.c | 7 +++++++
14 files changed, 103 insertions(+), 3 deletions(-)
--
2.30.2
^ permalink raw reply [relevance 3%]
* ethdev: hide internal structures
@ 2021-11-16 0:24 4% Tyler Retzlaff
2021-11-16 9:32 0% ` Ferruh Yigit
2021-11-16 10:32 3% ` Ananyev, Konstantin
0 siblings, 2 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-16 0:24 UTC (permalink / raw)
To: dev
hi folks,
I don't understand the text of this change. would you mind explaining?
commit f9bdee267ab84fd12dc288419aba341310b6ae08
Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Wed Oct 13 14:37:04 2021 +0100
ethdev: hide internal structures
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessed directly
+  by user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user app is required) and
+  PMD developers (no changes in PMD is required).
if it is an ABI break (and it is also an API break) how is it that
this change could be "transparent" to the user application?
* existing binaries will not run. (they need to be recompiled)
* existing code will not compile. (code changes are required)
in order to cope with this change an application will have to have the
code modified and will need to be re-compiled. so i don't understand how
that is transparent?
thanks
^ permalink raw reply [relevance 4%]
* Re: ethdev: hide internal structures
2021-11-16 0:24 4% ethdev: hide internal structures Tyler Retzlaff
@ 2021-11-16 9:32 0% ` Ferruh Yigit
2021-11-16 17:54 4% ` Tyler Retzlaff
2021-11-16 10:32 3% ` Ananyev, Konstantin
1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-16 9:32 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: Konstantin Ananyev, dev
On 11/16/2021 12:24 AM, Tyler Retzlaff wrote:
> hi folks,
>
> I don't understand the text of this change. would you mind explaining?
>
> commit f9bdee267ab84fd12dc288419aba341310b6ae08
> Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Date: Wed Oct 13 14:37:04 2021 +0100
> ethdev: hide internal structures
>
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures. ``rte_eth_devices[]`` can't be accessed directly
> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
>
>
> if it is an ABI break (and it is also an API break) how is it that
> this change could be "transparent" to the user application?
>
> * existing binaries will not run. (they need to be recompiled)
> * existing code will not compile. (code changes are required)
>
> in order to cope with this change an application will have to have the
> code modified and will need to be re-compiled. so i don't understand how
> that is transparent?
>
Hi Tyler,
It shouldn't be an API change; which API is changed?
Existing binaries won't run and need a recompile, but the code shouldn't
need to change.
Unless the application is accessing *internal* DPDK structs (which were
exposed to the application because of some technical issues that the
above commit fixes).
What code change do you require, in the driver or application?
^ permalink raw reply [relevance 0%]
* RE: ethdev: hide internal structures
2021-11-16 0:24 4% ethdev: hide internal structures Tyler Retzlaff
2021-11-16 9:32 0% ` Ferruh Yigit
@ 2021-11-16 10:32 3% ` Ananyev, Konstantin
2021-11-16 19:10 0% ` Tyler Retzlaff
1 sibling, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-16 10:32 UTC (permalink / raw)
To: Tyler Retzlaff, dev
> hi folks,
>
> I don't understand the text of this change. would you mind explaining?
>
> commit f9bdee267ab84fd12dc288419aba341310b6ae08
> Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Date: Wed Oct 13 14:37:04 2021 +0100
> ethdev: hide internal structures
>
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures. ``rte_eth_devices[]`` can't be accessed directly
> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
>
>
> if it is an ABI break (and it is also an API break) how is it that
> this change could be "transparent" to the user application?
>
> * existing binaries will not run. (they need to be recompiled)
> * existing code will not compile. (code changes are required)
>
> in order to cope with this change an application will have to have the
> code modified and will need to be re-compiled. so i don't understand how
> that is transparent?
rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
data structures that were used by public inline ethdev functions.
A well-behaved app should not access these data structures directly,
so for a well-behaved app there should be no changes in the code required.
That is what I meant by 'transparent' above.
But it is still an ABI change, so yes, the app has to be re-compiled.
Konstantin
^ permalink raw reply [relevance 3%]
* Re: ethdev: hide internal structures
2021-11-16 9:32 0% ` Ferruh Yigit
@ 2021-11-16 17:54 4% ` Tyler Retzlaff
2021-11-16 20:07 4% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 17:54 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Konstantin Ananyev, dev
On Tue, Nov 16, 2021 at 09:32:15AM +0000, Ferruh Yigit wrote:
>
> Hi Tyler,
>
> It shouldn't be an API change, which API is changed?
exported declarations that were consumed by the application were removed
from an installed header. anything making reference to rte_eth_devices[]
will no longer compile.
any change that removes any identifier or macro visible to the application
from an installed header is an api break.
> Existing binaries won't run and needs recompile, but shouldn't need to change
> the code.
> Unless application is accessing *internal* DPDK structs (which were exposed
> to application because of some technical issues that above commit fixes).
the application was, but the access was to a symbol and identifier that
had not been previously marked __rte_internal or __rte_experimental and thus
assumed to be public.
just to be clear i agree with the change making these internal but there
was virtually no warning.
https://doc.dpdk.org/guides-19.11/contributing/abi_policy.html
the exports and declarations need to be marked deprecated to give ample
time before being removed in accordance with the abi policy.
i will ask that work be scheduled to identify the gap in the public api
surface that access to these structures was providing, rather than
backing the change out. fortunately it is only schedule impacting rather
than service impacting since the application hadn't been deployed yet.
i thought someone was responsible for reviewing abi/api related changes
on the board to understand the implications of changes like this?
thanks
^ permalink raw reply [relevance 4%]
* Re: ethdev: hide internal structures
2021-11-16 10:32 3% ` Ananyev, Konstantin
@ 2021-11-16 19:10 0% ` Tyler Retzlaff
2021-11-16 21:25 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 19:10 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: dev
On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
>
> rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> data structures that were used by public inline ethdev functions.
> Well behaving app should not access these data structures directly.
> So, for well behaving app there should no changes in the code required.
> That what I meant by 'transparent' above.
> But it is still an ABI change, so yes, the app has to be re-compiled.
so it appears the application was establishing a private context /
vendor extension between the application and a pmd. the application
was abusing access to the rte_eth_devices[] to get the private context
from the rte_eth_dev.
is there a proper / supported way of providing this functionality
through the public api?
>
> Konstantin
^ permalink raw reply [relevance 0%]
* Re: ethdev: hide internal structures
2021-11-16 17:54 4% ` Tyler Retzlaff
@ 2021-11-16 20:07 4% ` Ferruh Yigit
2021-11-16 20:44 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-16 20:07 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Konstantin Ananyev, dev, Ray Kinsella, Thomas Monjalon, David Marchand
On 11/16/2021 5:54 PM, Tyler Retzlaff wrote:
> On Tue, Nov 16, 2021 at 09:32:15AM +0000, Ferruh Yigit wrote:
>>
>> Hi Tyler,
>>
>> It shouldn't be an API change, which API is changed?
>
> exported declarations that were consumed by the application were removed
> from an installed header. anything making reference to rte_eth_devices[]
> will no longer compile.
>
> any change that removes any identifier or macro visible to the application
> from an installed header is an api break.
>
>> Existing binaries won't run and needs recompile, but shouldn't need to change
>> the code.
>> Unless application is accessing *internal* DPDK structs (which were exposed
>> to application because of some technical issues that above commit fixes).
>
> the application was, but the access was to a symbol and identifier that
> had not been previously marked __rte_internal or __rte_experimental and thus
> assumed to be public.
>
> just to be clear i agree with the change making these internal but there
> was virtually no warning.
>
> https://doc.dpdk.org/guides-19.11/contributing/abi_policy.html
>
> the exports and declarations need to be marked deprecated to give ample
> time before being removed in accordance with the abi policy.
>
> i will ask that work be scheduled to identify the gap in the public api
> surface that access to these structures was providing rather than
> backing the change out. fortunately it is only schedule rather
> than service impacting since the application hadn't been deployed yet.
>
> i thought someone was responsible for reviewing abi/api related changes
> on the board to understand the implications of changes like this?
>
Sorry for the negative impact on your product, I can understand the
frustration.
The 'rte_eth_devices[]' array has been marked as '@internal' in the header
file since 2012 [1], so it is not new, but it was not marked
programmatically, only as a comment in the header file.
The expectation was that applications would not use it directly.
For long-term ABI stability, this is a good step forward. Although
the impact was known, the best time for this kind of change is the 21.11
release; otherwise the change needs to wait (at least) one more year.
This change was discussed and accepted by the technical board [2],
and a deprecation notice was sent to the mail list [3] for notification.
Agreed, the announcement was a little later than we normally do (although
only a month later than what is defined in the process); this was accepted
by the board to not miss the ABI break window (.11 release).
As you will recognize, not only ethdev but a few more device abstraction
layer libraries had similar changes in this release.
[1]
f831c63cbe86 ("ethdev: minor changes")
[2]
https://mails.dpdk.org/archives/dev/2021-July/214662.html
[3]
https://patches.dpdk.org/project/dpdk/patch/20210826103500.2172550-1-ferruh.yigit@intel.com/
^ permalink raw reply [relevance 4%]
* Re: ethdev: hide internal structures
2021-11-16 20:07 4% ` Ferruh Yigit
@ 2021-11-16 20:44 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-16 20:44 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Konstantin Ananyev, dev, Ray Kinsella, Thomas Monjalon, David Marchand
On Tue, Nov 16, 2021 at 08:07:49PM +0000, Ferruh Yigit wrote:
> On 11/16/2021 5:54 PM, Tyler Retzlaff wrote:
> >
> >i thought someone was responsible for reviewing abi/api related changes
> >on the board to understand the implications of changes like this?
> >
>
> Sorry for the negative impact on your product, I can understand the
> frustration.
>
> The 'rte_eth_devices[]' was marked as '@internal' in the header file
> since 2012 [1], so it is not new, but it was not marked programmatically,
> only as comment in the header file.
> Expectation was applications to not directly use it.
unfortunately there are a lot of these expectations in the project code.
rarely do consuming applications get written in the way we would expect,
and this is a lesson that if something is not mechanically enforced it
isn't prevented.
>
>
> For long term ABI stability, this is a good step forward, although
> the impact was known, best time for these kind of change is the 21.11
> release, otherwise change needs to wait (at least) one more year.
agreed, we appreciate what will be accomplished with the change.
>
> This change has been discussed and accepted in the technical board [2],
> and a deprecation notice has been sent to mail list [3] for notification.
the notes from [2] aren't that clear, but i think it is fair you point
out that if [3] were read carefully it was implied that it would impact
ethdev. anyway, it is moot now.
>
> Agree the announce was a little late than we normally do (although
> only a month late than what defined in process), this is accepted by
> the board to not miss the ABI break window (.11 release).
> As you will recognize, not only ethdev, but a few more device abstraction
> layer libraries had similar changes in this release.
yes, i understand. perhaps in the future it may be possible to introduce
some kind of __deprecation notice during compilation earlier than the
removal and it may have been noticed sooner. perhaps a patch that did
this near the time of the original notification [2].
i've left the details of the functional gap in my other reply to the
thread, hopefully you have a suggestion.
thanks Ferruh, appreciate it.
>
>
> [1]
> f831c63cbe86 ("ethdev: minor changes")
>
> [2]
> https://mails.dpdk.org/archives/dev/2021-July/214662.html
>
> [3]
> https://patches.dpdk.org/project/dpdk/patch/20210826103500.2172550-1-ferruh.yigit@intel.com/
^ permalink raw reply [relevance 0%]
* Re: ethdev: hide internal structures
2021-11-16 19:10 0% ` Tyler Retzlaff
@ 2021-11-16 21:25 0% ` Stephen Hemminger
2021-11-16 22:58 3% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-16 21:25 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: Ananyev, Konstantin, dev
On Tue, 16 Nov 2021 11:10:18 -0800
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
> >
> > rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > data structures that were used by public inline ethdev functions.
> > Well behaving app should not access these data structures directly.
> > So, for well behaving app there should no changes in the code required.
> > That what I meant by 'transparent' above.
> > But it is still an ABI change, so yes, the app has to be re-compiled.
>
> so it appears the application was establishing a private context /
> vendor extension between the application and a pmd. the application
> was abusing access to the rte_eth_devices[] to get the private context
> from the rte_eth_dev.
>
> is there a proper / supported way of providing this functionality
> through the public api?
>
> >
> > Konstantin
Keep an array in the application? The port id is universally
available.
struct my_portdata *my_ports[RTE_ETH_MAXPORTS];
^ permalink raw reply [relevance 0%]
* Re: ethdev: hide internal structures
2021-11-16 21:25 0% ` Stephen Hemminger
@ 2021-11-16 22:58 3% ` Tyler Retzlaff
2021-11-16 23:22 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 22:58 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Ananyev, Konstantin, dev
On Tue, Nov 16, 2021 at 01:25:10PM -0800, Stephen Hemminger wrote:
> On Tue, 16 Nov 2021 11:10:18 -0800
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
>
> > On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
> > >
> > > rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > > data structures that were used by public inline ethdev functions.
> > > Well behaving app should not access these data structures directly.
> > > So, for well behaving app there should no changes in the code required.
> > > That what I meant by 'transparent' above.
> > > But it is still an ABI change, so yes, the app has to be re-compiled.
> >
> > so it appears the application was establishing a private context /
> > vendor extension between the application and a pmd. the application
> > was abusing access to the rte_eth_devices[] to get the private context
> > from the rte_eth_dev.
> >
> > is there a proper / supported way of providing this functionality
> > through the public api?
> >
> > >
> > > Konstantin
>
> Keep a array in application? Portid is universally
> available.
>
> struct my_portdata *my_ports[RTE_ETH_MAXPORTS];
i guess by this you mean maintain the storage in the application and
then export that storage for proprietary use in the pmd. ordinarily i
wouldn't want to have this hard-coded into the modules abi but since
we are talking about vendor extensions it has to be managed somewhere.
anyway, i guess i have my answer.
thanks stephen, appreciate it.
^ permalink raw reply [relevance 3%]
* Re: ethdev: hide internal structures
2021-11-16 22:58 3% ` Tyler Retzlaff
@ 2021-11-16 23:22 0% ` Stephen Hemminger
2021-11-17 22:05 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-16 23:22 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: Ananyev, Konstantin, dev
On Tue, 16 Nov 2021 14:58:08 -0800
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> On Tue, Nov 16, 2021 at 01:25:10PM -0800, Stephen Hemminger wrote:
> > On Tue, 16 Nov 2021 11:10:18 -0800
> > Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> >
> > > On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
> > > >
> > > > rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > > > data structures that were used by public inline ethdev functions.
> > > > Well behaving app should not access these data structures directly.
> > > > So, for well behaving app there should no changes in the code required.
> > > > That what I meant by 'transparent' above.
> > > > But it is still an ABI change, so yes, the app has to be re-compiled.
> > >
> > > so it appears the application was establishing a private context /
> > > vendor extension between the application and a pmd. the application
> > > was abusing access to the rte_eth_devices[] to get the private context
> > > from the rte_eth_dev.
> > >
> > > is there a proper / supported way of providing this functionality
> > > through the public api?
> > >
> > > >
> > > > Konstantin
> >
> > Keep a array in application? Portid is universally
> > available.
> >
> > struct my_portdata *my_ports[RTE_ETH_MAXPORTS];
>
> i guess by this you mean maintain the storage in the application and
> then export that storage for proprietary use in the pmd. ordinarily i
> wouldn't want to have this hard-coded into the modules abi but since
> we are talking about vendor extensions it has to be managed somewhere.
>
> anyway, i guess i have my answer.
>
> thanks stephen, appreciate it.
Don't understand, how are application and pmd exchanging extra data?
Maybe a non-standard PMD API?
^ permalink raw reply [relevance 0%]
* Re: ethdev: hide internal structures
2021-11-16 23:22 0% ` Stephen Hemminger
@ 2021-11-17 22:05 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-17 22:05 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Ananyev, Konstantin, dev
On Tue, Nov 16, 2021 at 03:22:01PM -0800, Stephen Hemminger wrote:
> On Tue, 16 Nov 2021 14:58:08 -0800
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
>
> > >
> > > Keep a array in application? Portid is universally
> > > available.
> > >
> > > struct my_portdata *my_ports[RTE_ETH_MAXPORTS];
> >
> > i guess by this you mean maintain the storage in the application and
> > then export that storage for proprietary use in the pmd. ordinarily i
> > wouldn't want to have this hard-coded into the modules abi but since
> > we are talking about vendor extensions it has to be managed somewhere.
> >
> > anyway, i guess i have my answer.
> >
> > thanks stephen, appreciate it.
>
> Don't understand, how are application and pmd exchanging extra data?
> Maybe a non-standard PMD API?
yes. consider the case of a "vendor extension" where for a specific pmd
driver it is possible that extra / non-standard operations are
supported.
in this instance we have a pmd that does some whiz-bang thing that isn't
something most hardware/pmds could do (or need to under ordinary
circumstances) so it doesn't make sense to adapt the generalized pmd api
for some one-off hardware/device. however, the vendor ships an
application that is extended to understand this extra functionality and
needs a way to hook up with and inter-operate with the non-standard api.
one example that is very common is some kind of advanced statistics that
most hardware aren't capable of producing. as long as the application
knows it is working with this advanced hardware it can present those
statistics.
in the code i'm looking at it isn't statistics but specialized control
operations that can't be expressed via the exported pmd api (and should
not be).
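The port-indexed table Stephen suggested earlier in the thread can be sketched in plain C. This is a minimal illustration only, not DPDK API: the names `MAX_ETH_PORTS`, `port_data_set` and `port_data_get` are made up for the example (`MAX_ETH_PORTS` stands in for `RTE_MAX_ETHPORTS`), and how the pointer is handed to the vendor-specific PMD call is left to the application.

```c
/* Sketch: application-owned per-port private data, keyed by port id.
 * The application keeps the storage; a vendor-extension PMD API can
 * then be passed the pointer looked up for a given port. All names
 * here are illustrative, not part of the DPDK API. */
#include <stddef.h>
#include <stdint.h>

#define MAX_ETH_PORTS 32 /* stand-in for RTE_MAX_ETHPORTS */

static void *port_data[MAX_ETH_PORTS];

/* Associate application-private data with a port id. */
static inline int
port_data_set(uint16_t port_id, void *data)
{
	if (port_id >= MAX_ETH_PORTS)
		return -1;
	port_data[port_id] = data;
	return 0;
}

/* Look up the data later, e.g. before calling the non-standard API. */
static inline void *
port_data_get(uint16_t port_id)
{
	return port_id < MAX_ETH_PORTS ? port_data[port_id] : NULL;
}
```

Since the port id is available everywhere an ethdev is used, this keeps the coupling in the application rather than hard-coding vendor data into the module ABI.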
^ permalink raw reply [relevance 0%]
* [PATCH v1 1/3] fix PMD wording typo
@ 2021-11-18 14:46 1% ` Sean Morrissey
1 sibling, 0 replies; 200+ results
From: Sean Morrissey @ 2021-11-18 14:46 UTC (permalink / raw)
To: Xiaoyun Li, Nicolas Chautru, Jay Zhou, Ciara Loftus, Qi Zhang,
Steven Webster, Matt Peters, Apeksha Gupta, Sachin Saxena,
Xiao Wang, Haiyue Wang, Beilei Xing, Stephen Hemminger, Long Li,
Heinrich Kuhn, Jerin Jacob, Maciej Czekaj, Maxime Coquelin,
Chenbo Xia, Konstantin Ananyev, Andrew Rybchenko, Fiona Trahe,
Ashish Gupta, John Griffin, Deepak Kumar Jain, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Rosen Xu, Tianfei zhang, Akhil Goyal,
Declan Doherty, Chengwen Feng, Kevin Laatz, Bruce Richardson,
Thomas Monjalon, Ferruh Yigit
Cc: dev, Sean Morrissey, Conor Fogarty
Remove the use of "driver" following "PMD" as it is
unnecessary.
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
---
app/test-pmd/cmdline.c | 4 +--
doc/guides/bbdevs/turbo_sw.rst | 2 +-
doc/guides/cryptodevs/virtio.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/nics/af_packet.rst | 2 +-
doc/guides/nics/af_xdp.rst | 2 +-
doc/guides/nics/avp.rst | 4 +--
doc/guides/nics/enetfec.rst | 2 +-
doc/guides/nics/fm10k.rst | 4 +--
doc/guides/nics/intel_vf.rst | 2 +-
doc/guides/nics/netvsc.rst | 2 +-
doc/guides/nics/nfp.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/nics/virtio.rst | 4 +--
.../prog_guide/writing_efficient_code.rst | 4 +--
doc/guides/rel_notes/known_issues.rst | 2 +-
doc/guides/rel_notes/release_16_04.rst | 2 +-
doc/guides/rel_notes/release_19_05.rst | 6 ++--
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 4 +--
doc/guides/rel_notes/release_21_02.rst | 2 +-
doc/guides/rel_notes/release_21_05.rst | 2 +-
doc/guides/rel_notes/release_21_08.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/rel_notes/release_2_2.rst | 4 +--
doc/guides/sample_app_ug/bbdev_app.rst | 2 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
doc/guides/tools/testeventdev.rst | 2 +-
drivers/common/sfc_efx/efsys.h | 2 +-
drivers/compress/qat/qat_comp_pmd.h | 2 +-
drivers/crypto/qat/qat_asym_pmd.h | 2 +-
drivers/crypto/qat/qat_sym_pmd.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/base/hinic_pmd_cmdq.h | 2 +-
drivers/net/hns3/hns3_ethdev.c | 6 ++--
drivers/net/hns3/hns3_ethdev.h | 6 ++--
drivers/net/hns3/hns3_ethdev_vf.c | 28 +++++++++----------
drivers/net/hns3/hns3_rss.c | 4 +--
drivers/net/hns3/hns3_rxtx.c | 8 +++---
drivers/net/hns3/hns3_rxtx.h | 4 +--
drivers/net/i40e/i40e_ethdev.c | 2 +-
drivers/net/nfp/nfp_common.h | 2 +-
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/raw/ifpga/base/README | 2 +-
lib/bbdev/rte_bbdev.h | 12 ++++----
lib/compressdev/rte_compressdev_pmd.h | 2 +-
lib/cryptodev/cryptodev_pmd.h | 2 +-
lib/dmadev/rte_dmadev_core.h | 2 +-
lib/eal/include/rte_dev.h | 2 +-
lib/eal/include/rte_devargs.h | 4 +--
lib/ethdev/rte_ethdev.h | 18 ++++++------
52 files changed, 97 insertions(+), 97 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c43c85c591..6e10afeedd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2701,7 +2701,7 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
ret = rte_eth_dev_tx_queue_stop(res->portid, res->qid);
if (ret == -ENOTSUP)
- fprintf(stderr, "Function not supported in PMD driver\n");
+ fprintf(stderr, "Function not supported in PMD\n");
}
cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
@@ -14700,7 +14700,7 @@ cmd_ddp_info_parsed(
free(proto);
#endif
if (ret == -ENOTSUP)
- fprintf(stderr, "Function not supported in PMD driver\n");
+ fprintf(stderr, "Function not supported in PMD\n");
close_file(pkg);
}
diff --git a/doc/guides/bbdevs/turbo_sw.rst b/doc/guides/bbdevs/turbo_sw.rst
index 43c5129fd7..1e23e37027 100644
--- a/doc/guides/bbdevs/turbo_sw.rst
+++ b/doc/guides/bbdevs/turbo_sw.rst
@@ -149,7 +149,7 @@ Example:
* For AVX512 machines with SDK libraries installed then both 4G and 5G can be enabled for full real time FEC capability.
For AVX2 machines it is possible to only enable the 4G libraries and the PMD capabilities will be limited to 4G FEC.
- If no library is present then the PMD driver will still build but its capabilities will be limited accordingly.
+ If no library is present then the PMD will still build but its capabilities will be limited accordingly.
To use the PMD in an application, user must:
diff --git a/doc/guides/cryptodevs/virtio.rst b/doc/guides/cryptodevs/virtio.rst
index 8b96446ff2..ce4d43519a 100644
--- a/doc/guides/cryptodevs/virtio.rst
+++ b/doc/guides/cryptodevs/virtio.rst
@@ -73,7 +73,7 @@ number of the virtio-crypto device:
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
echo "1af4 1054" > /sys/bus/pci/drivers/uio_pci_generic/new_id
-Finally the front-end virtio crypto PMD driver can be installed.
+Finally the front-end virtio crypto PMD can be installed.
Tests
-----
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index efd2dd23f1..4f99617233 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -66,7 +66,7 @@ The EAL options are as follows:
* ``-d``:
Add a driver or driver directory to be loaded.
- The application should use this option to load the pmd drivers
+ The application should use this option to load the PMDs
that are built as shared libraries.
* ``-m MB``:
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index 54feffdef4..8292369141 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -5,7 +5,7 @@ AF_PACKET Poll Mode Driver
==========================
The AF_PACKET socket in Linux allows an application to receive and send raw
-packets. This Linux-specific PMD driver binds to an AF_PACKET socket and allows
+packets. This Linux-specific PMD binds to an AF_PACKET socket and allows
a DPDK application to send and receive raw packets through the Kernel.
In order to improve Rx and Tx performance this implementation makes use of
diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index 8bf40b5f0f..c9d0e1ad6c 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -12,7 +12,7 @@ For the full details behind AF_XDP socket, you can refer to
`AF_XDP documentation in the Kernel
<https://www.kernel.org/doc/Documentation/networking/af_xdp.rst>`_.
-This Linux-specific PMD driver creates the AF_XDP socket and binds it to a
+This Linux-specific PMD creates the AF_XDP socket and binds it to a
specific netdev queue, it allows a DPDK application to send and receive raw
packets through the socket which would bypass the kernel network stack.
Current implementation only supports single queue, multi-queues feature will
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
index 1a194fc23c..a749f2a0f6 100644
--- a/doc/guides/nics/avp.rst
+++ b/doc/guides/nics/avp.rst
@@ -35,7 +35,7 @@ to another with minimal packet loss.
Features and Limitations of the AVP PMD
---------------------------------------
-The AVP PMD driver provides the following functionality.
+The AVP PMD provides the following functionality.
* Receive and transmit of both simple and chained mbuf packets,
@@ -74,7 +74,7 @@ Launching a VM with an AVP type network attachment
The following example will launch a VM with three network attachments. The
first attachment will have a default vif-model of "virtio". The next two
network attachments will have a vif-model of "avp" and may be used with a DPDK
-application which is built to include the AVP PMD driver.
+application which is built to include the AVP PMD.
.. code-block:: console
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index a64e72fdd6..381635e627 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -65,7 +65,7 @@ The diagram below shows a system level overview of ENETFEC:
| PHY |
+-----+
-ENETFEC Ethernet driver is traditional DPDK PMD driver running in userspace.
-ENETFEC Ethernet driver is traditional DPDK PMD driver running in userspace.
+ENETFEC Ethernet driver is a traditional DPDK PMD running in userspace.
'fec-uio' is the kernel driver.
The MAC and PHY are the hardware blocks.
ENETFEC PMD uses standard UIO interface to access kernel
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index bba53f5a64..d6efac0917 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -114,9 +114,9 @@ Switch manager
~~~~~~~~~~~~~~
The Intel FM10000 family of NICs integrate a hardware switch and multiple host
-interfaces. The FM10000 PMD driver only manages host interfaces. For the
+interfaces. The FM10000 PMD only manages host interfaces. For the
switch component another switch driver has to be loaded prior to the
-FM10000 PMD driver. The switch driver can be acquired from Intel support.
+FM10000 PMD. The switch driver can be acquired from Intel support.
Only Testpoint is validated with DPDK, the latest version that has been
validated with DPDK is 4.1.6.
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fd235e1463..648af39c22 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -571,7 +571,7 @@ Fast Host-based Packet Processing
Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
-the DPDK VF PMD driver performs the same throughput result as a non-VT native environment.
+the DPDK VF PMD performs the same throughput result as a non-VT native environment.
With such host instance fast packet processing, lots of services such as filtering, QoS,
DPI can be offloaded on the host fast path.
diff --git a/doc/guides/nics/netvsc.rst b/doc/guides/nics/netvsc.rst
index c0e218c743..77efe1dc91 100644
--- a/doc/guides/nics/netvsc.rst
+++ b/doc/guides/nics/netvsc.rst
@@ -14,7 +14,7 @@ checksum and segmentation offloads.
Features and Limitations of Hyper-V PMD
---------------------------------------
-In this release, the hyper PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the hyper PMD provides the basic functionality of packet reception and transmission.
* It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
when transmitting packets. The packet size supported is from 64 to 65536.
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index bf8be723b0..30cdc69202 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -14,7 +14,7 @@ This document explains how to use DPDK with the Netronome Poll Mode
Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
(NFP-6xxx) and Netronome's Flow Processor 4xxx (NFP-4xxx).
-NFP is a SRIOV capable device and the PMD driver supports the physical
+NFP is a SRIOV capable device and the PMD supports the physical
function (PF) and the virtual functions (VFs).
Dependencies
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 98f23a2b2a..d96395dafa 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -199,7 +199,7 @@ Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/
When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
Each secondary VF adds 8 additional queues to the queue set.
-During PMD driver initialization, the primary VF's are enumerated by checking the
+During PMD initialization, the primary VF's are enumerated by checking the
specific flag (see sqs message in DPDK boot log - sqs indicates secondary queue set).
They are at the beginning of VF list (the remain ones are secondary VF's).
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 98e0d012b7..7c0ae2b3af 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -17,7 +17,7 @@ With this enhancement, virtio could achieve quite promising performance.
For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
please refer to Chapter "Driver for VM Emulated Devices".
-In this chapter, we will demonstrate usage of virtio PMD driver with two backends,
+In this chapter, we will demonstrate usage of virtio PMD with two backends,
standard qemu vhost back end and vhost kni back end.
Virtio Implementation in DPDK
@@ -40,7 +40,7 @@ end if necessary.
Features and Limitations of virtio PMD
--------------------------------------
-In this release, the virtio PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the virtio PMD provides the basic functionality of packet reception and transmission.
* It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
when transmitting packets. The packet size supported is from 64 to 1518.
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index a61e8320ae..e6c26efdd3 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -119,8 +119,8 @@ The code algorithm that dequeues messages may be something similar to the follow
my_process_bulk(obj_table, count);
}
-PMD Driver
-----------
+PMD
+---
The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
allowing the factorization of some code for each call in the send or receive function.
diff --git a/doc/guides/rel_notes/known_issues.rst b/doc/guides/rel_notes/known_issues.rst
index beea877bad..187d9c942e 100644
--- a/doc/guides/rel_notes/known_issues.rst
+++ b/doc/guides/rel_notes/known_issues.rst
@@ -250,7 +250,7 @@ PMD does not work with --no-huge EAL command line parameter
**Description**:
Currently, the DPDK does not store any information about memory allocated by ``malloc()` (for example, NUMA node,
- physical address), hence PMD drivers do not work when the ``--no-huge`` command line parameter is supplied to EAL.
+ physical address), hence PMDs do not work when the ``--no-huge`` command line parameter is supplied to EAL.
**Implication**:
Sending and receiving data with PMD will not work.
diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
index b7d07834e1..ac18e1dddb 100644
--- a/doc/guides/rel_notes/release_16_04.rst
+++ b/doc/guides/rel_notes/release_16_04.rst
@@ -56,7 +56,7 @@ New Features
* **Enabled Virtio 1.0 support.**
- Enabled Virtio 1.0 support for Virtio pmd driver.
+ Enabled Virtio 1.0 support for Virtio PMD.
* **Supported Virtio for ARM.**
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 30f704e204..89ae425bdb 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -46,13 +46,13 @@ New Features
Updated the KNI kernel module to set the ``max_mtu`` according to the given
initial MTU size. Without it, the maximum MTU was 1500.
- Updated the KNI PMD driver to set the ``mbuf_size`` and MTU based on
+ Updated the KNI PMD to set the ``mbuf_size`` and MTU based on
the given mb-pool. This provide the ability to pass jumbo frames
if the mb-pool contains a suitable buffer size.
* **Added the AF_XDP PMD.**
- Added a Linux-specific PMD driver for AF_XDP. This PMD can create an AF_XDP socket
+ Added a Linux-specific PMD for AF_XDP. This PMD can create an AF_XDP socket
and bind it to a specific netdev queue. It allows a DPDK application to send
and receive raw packets through the socket which would bypass the kernel
network stack to achieve high performance packet processing.
@@ -240,7 +240,7 @@ ABI Changes
The ``rte_eth_dev_info`` structure has had two extra fields
added: ``min_mtu`` and ``max_mtu``. Each of these are of type ``uint16_t``.
- The values of these fields can be set specifically by the PMD drivers as
+ The values of these fields can be set specifically by the PMDs as
supported values can vary from device to device.
* cryptodev: in 18.08 a new structure ``rte_crypto_asym_op`` was introduced and
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index b509a6dd28..302b3e5f37 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -189,7 +189,7 @@ New Features
* **Added Marvell OCTEON TX2 crypto PMD.**
- Added a new PMD driver for hardware crypto offload block on ``OCTEON TX2``
+ Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
SoC.
See :doc:`../cryptodevs/octeontx2` for more details
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 90cc3ed680..af7ce90ba3 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -192,7 +192,7 @@ New Features
* **Added Wangxun txgbe PMD.**
- Added a new PMD driver for Wangxun 10 Gigabit Ethernet NICs.
+ Added a new PMD for Wangxun 10 Gigabit Ethernet NICs.
See the :doc:`../nics/txgbe` for more details.
@@ -288,7 +288,7 @@ New Features
* **Added Marvell OCTEON TX2 regex PMD.**
- Added a new PMD driver for the hardware regex offload block for OCTEON TX2 SoC.
+ Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
See the :doc:`../regexdevs/octeontx2` for more details.
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 9d5e17758f..5fbf5b3d43 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -135,7 +135,7 @@ New Features
* **Added mlx5 compress PMD.**
- Added a new compress PMD driver for Bluefield 2 adapters.
+ Added a new compress PMD for Bluefield 2 adapters.
See the :doc:`../compressdevs/mlx5` for more details.
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8adb225a4d..49044ed422 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -78,7 +78,7 @@ New Features
* Updated ena_com (HAL) to the latest version.
* Added indication of the RSS hash presence in the mbuf.
-* **Updated Arkville PMD driver.**
+* **Updated Arkville PMD.**
Updated Arkville net driver with new features and improvements, including:
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6fb4e43346..ac1c081903 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -67,7 +67,7 @@ New Features
* **Added Wangxun ngbe PMD.**
- Added a new PMD driver for Wangxun 1Gb Ethernet NICs.
+ Added a new PMD for Wangxun 1Gb Ethernet NICs.
See the :doc:`../nics/ngbe` for more details.
* **Added inflight packets clear API in vhost library.**
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..1d6774afc1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -354,7 +354,7 @@ New Features
* **Added NXP LA12xx baseband PMD.**
- * Added a new baseband PMD driver for NXP LA12xx Software defined radio.
+ * Added a new baseband PMD for NXP LA12xx Software defined radio.
* See the :doc:`../bbdevs/la12xx` for more details.
* **Updated Mellanox compress driver.**
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 8273473ff4..029b758e90 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -10,8 +10,8 @@ New Features
* **Introduce ARMv7 and ARMv8 architectures.**
* It is now possible to build DPDK for the ARMv7 and ARMv8 platforms.
- * ARMv7 can be tested with virtual PMD drivers.
- * ARMv8 can be tested with virtual and physical PMD drivers.
+ * ARMv7 can be tested with virtual PMDs.
+ * ARMv8 can be tested with virtual and physical PMDs.
* **Enabled freeing of ring.**
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 45e69e36e2..7f02f0ed90 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -31,7 +31,7 @@ Limitations
Compiling the Application
-------------------------
-DPDK needs to be built with ``baseband_turbo_sw`` PMD driver enabled along
+DPDK needs to be built with ``baseband_turbo_sw`` PMD enabled along
with ``FLEXRAN SDK`` Libraries. Refer to *SW Turbo Poll Mode Driver*
documentation for more details on this.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 486247ac2e..ecb1c857c4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -220,7 +220,7 @@ Once the application starts, it transitions through three phases:
* **Final Phase** - Perform the following tasks:
- Calls the EAL, PMD driver and ACL library to free resource, then quits.
+ Calls the EAL, PMD and ACL library to free resource, then quits.
Compiling the Application
-------------------------
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 7b4cdeb43f..48efb9ea6e 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -239,7 +239,7 @@ to the ordered queue. The worker receives the events from ordered queue and
forwards to atomic queue. Since the events from an ordered queue can be
processed in parallel on the different workers, the ingress order of events
might have changed on the downstream atomic queue enqueue. On enqueue to the
-atomic queue, the eventdev PMD driver reorders the event to the original
+atomic queue, the eventdev PMD reorders the event to the original
ingress order(i.e producer ingress order).
When the event is dequeued from the atomic queue by the worker, this test
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index b2109bf3c0..3860c2835a 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -609,7 +609,7 @@ typedef struct efsys_bar_s {
/* DMA SYNC */
/*
- * DPDK does not provide any DMA syncing API, and no PMD drivers
+ * DPDK does not provide any DMA syncing API, and no PMDs
* have any traces of explicit DMA syncing.
* DMA mapping is assumed to be coherent.
*/
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 86317a513c..3c8682a768 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -13,7 +13,7 @@
#include "qat_device.h"
#include "qat_comp.h"
-/**< Intel(R) QAT Compression PMD driver name */
+/**< Intel(R) QAT Compression PMD name */
#define COMPRESSDEV_NAME_QAT_PMD compress_qat
/* Private data structure for a QAT compression device capability. */
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..f988d646e5 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -10,7 +10,7 @@
#include "qat_crypto.h"
#include "qat_device.h"
-/** Intel(R) QAT Asymmetric Crypto PMD driver name */
+/** Intel(R) QAT Asymmetric Crypto PMD name */
#define CRYPTODEV_NAME_QAT_ASYM_PMD crypto_qat_asym
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index 0dc0c6f0d9..59fbdefa12 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -16,7 +16,7 @@
#include "qat_crypto.h"
#include "qat_device.h"
-/** Intel(R) QAT Symmetric Crypto PMD driver name */
+/** Intel(R) QAT Symmetric Crypto PMD name */
#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat
/* Internal capabilities */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7c85a05746..43e1d13431 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -255,7 +255,7 @@ rx_queue_clean(struct fm10k_rx_queue *q)
for (i = 0; i < q->nb_fake_desc; ++i)
q->hw_ring[q->nb_desc + i] = zero;
- /* vPMD driver has a different way of releasing mbufs. */
+ /* vPMD has a different way of releasing mbufs. */
if (q->rx_using_sse) {
fm10k_rx_queue_release_mbufs_vec(q);
return;
diff --git a/drivers/net/hinic/base/hinic_pmd_cmdq.h b/drivers/net/hinic/base/hinic_pmd_cmdq.h
index 0d5e380123..58a1fbda71 100644
--- a/drivers/net/hinic/base/hinic_pmd_cmdq.h
+++ b/drivers/net/hinic/base/hinic_pmd_cmdq.h
@@ -9,7 +9,7 @@
#define HINIC_SCMD_DATA_LEN 16
-/* pmd driver uses 64, kernel l2nic use 4096 */
+/* PMD uses 64, kernel l2nic uses 4096 */
#define HINIC_CMDQ_DEPTH 64
#define HINIC_CMDQ_BUF_SIZE 2048U
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 847e660f44..0bd12907d8 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -1060,7 +1060,7 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
return ret;
/*
* Only in HNS3_SW_SHIFT_AND_MODE the PVID related operation in Tx/Rx
- * need be processed by PMD driver.
+ * need to be processed by the PMD.
*/
if (pvid_en_state_change &&
hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
@@ -2592,7 +2592,7 @@ hns3_parse_cfg(struct hns3_cfg *cfg, struct hns3_cmd_desc *desc)
* Field ext_rss_size_max obtained from firmware will be more flexible
* for future changes and expansions, which is an exponent of 2, instead
* of reading out directly. If this field is not zero, hns3 PF PMD
- * driver uses it as rss_size_max under one TC. Device, whose revision
+ * uses it as rss_size_max under one TC. Device, whose revision
* id is greater than or equal to PCI_REVISION_ID_HIP09_A, obtains the
* maximum number of queues supported under a TC through this field.
*/
@@ -6311,7 +6311,7 @@ hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
if (ret < 0)
return ret;
- /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
+ /* HNS3 PMD only supports one bit set mode, e.g. 0x1, 0x4 */
if (!is_fec_mode_one_bit_set(mode)) {
hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
"FEC mode should be only one bit set", mode);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 6d30125dcc..488fe8dbbc 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -465,7 +465,7 @@ struct hns3_queue_intr {
* enable Rx interrupt.
*
* - HNS3_INTR_MAPPING_VEC_ALL
- * PMD driver can map/unmmap all interrupt vectors with queues When
+ * PMD can map/unmap all interrupt vectors with queues when
* Rx interrupt in enabled.
*/
uint8_t mapping_mode;
@@ -575,14 +575,14 @@ struct hns3_hw {
*
* - HNS3_SW_SHIFT_AND_DISCARD_MODE
* For some versions of hardware network engine, because of the
- * hardware limitation, PMD driver needs to detect the PVID status
+ * hardware limitation, PMD needs to detect the PVID status
* to work with haredware to implement PVID-related functions.
* For example, driver need discard the stripped PVID tag to ensure
* the PVID will not report to mbuf and shift the inserted VLAN tag
* to avoid port based VLAN covering it.
*
* - HNS3_HW_SHIT_AND_DISCARD_MODE
- * PMD driver does not need to process PVID-related functions in
+ * PMD does not need to process PVID-related functions in
* I/O process, Hardware will adjust the sequence between port based
* VLAN tag and BD VLAN tag automatically and VLAN tag stripped by
* PVID will be invisible to driver. And in this mode, hns3 is able
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index d8a99693e0..7d6e251bbe 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -232,7 +232,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
if (ret) {
/*
- * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev
+ * The hns3 VF PMD depends on the hns3 PF kernel ethdev
* driver. When user has configured a MAC address for VF device
* by "ip link set ..." command based on the PF device, the hns3
* PF kernel ethdev driver does not allow VF driver to request
@@ -312,9 +312,9 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
/*
- * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev driver,
+ * The hns3 VF PMD depends on the hns3 PF kernel ethdev driver,
* so there are some features for promiscuous/allmulticast mode in hns3
- * VF PMD driver as below:
+ * VF PMD as below:
* 1. The promiscuous/allmulticast mode can be configured successfully
* only based on the trusted VF device. If based on the non trusted
* VF device, configuring promiscuous/allmulticast mode will fail.
@@ -322,14 +322,14 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
* kernel ethdev driver on the host by the following command:
* "ip link set <eth num> vf <vf id> turst on"
* 2. After the promiscuous mode is configured successfully, hns3 VF PMD
- * driver can receive the ingress and outgoing traffic. In the words,
+ * can receive the ingress and outgoing traffic. In other words,
* all the ingress packets, all the packets sent from the PF and
* other VFs on the same physical port.
* 3. Note: Because of the hardware constraints, By default vlan filter
* is enabled and couldn't be turned off based on VF device, so vlan
* filter is still effective even in promiscuous mode. If upper
* applications don't call rte_eth_dev_vlan_filter API function to
- * set vlan based on VF device, hns3 VF PMD driver will can't receive
+ * set vlan based on VF device, hns3 VF PMD cannot receive
* the packets with vlan tag in promiscuoue mode.
*/
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
@@ -553,9 +553,9 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The hns3 PF/VF devices on the same port share the hardware MTU
* configuration. Currently, we send mailbox to inform hns3 PF kernel
- * ethdev driver to finish hardware MTU configuration in hns3 VF PMD
- * driver, there is no need to stop the port for hns3 VF device, and the
- * MTU value issued by hns3 VF PMD driver must be less than or equal to
+ * ethdev driver to finish hardware MTU configuration in hns3 VF PMD,
+ * there is no need to stop the port for hns3 VF device, and the
+ * MTU value issued by hns3 VF PMD must be less than or equal to
* PF's MTU.
*/
if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) {
@@ -565,8 +565,8 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/*
* when Rx of scattered packets is off, we have some possibility of
- * using vector Rx process function or simple Rx functions in hns3 PMD
- * driver. If the input MTU is increased and the maximum length of
+ * using vector Rx process function or simple Rx functions in hns3 PMD.
+ * If the input MTU is increased and the maximum length of
* received packets is greater than the length of a buffer for Rx
* packet, the hardware network engine needs to use multiple BDs and
* buffers to store these packets. This will cause problems when still
@@ -2075,7 +2075,7 @@ hns3vf_check_default_mac_change(struct hns3_hw *hw)
* ethdev driver sets the MAC address for VF device after the
* initialization of the related VF device, the PF driver will notify
* VF driver to reset VF device to make the new MAC address effective
- * immediately. The hns3 VF PMD driver should check whether the MAC
+ * immediately. The hns3 VF PMD should check whether the MAC
* address has been changed by the PF kernel ethdev driver, if changed
* VF driver should configure hardware using the new MAC address in the
* recovering hardware configuration stage of the reset process.
@@ -2416,12 +2416,12 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
/*
* The hns3 PF ethdev driver in kernel support setting VF MAC address
* on the host by "ip link set ..." command. To avoid some incorrect
- * scenes, for example, hns3 VF PMD driver fails to receive and send
+ * scenes, for example, hns3 VF PMD fails to receive and send
* packets after user configure the MAC address by using the
- * "ip link set ..." command, hns3 VF PMD driver keep the same MAC
+ * "ip link set ..." command, hns3 VF PMD keep the same MAC
* address strategy as the hns3 kernel ethdev driver in the
* initialization. If user configure a MAC address by the ip command
- * for VF device, then hns3 VF PMD driver will start with it, otherwise
+ * for VF device, then hns3 VF PMD will start with it, otherwise
* start with a random MAC address in the initialization.
*/
if (rte_is_zero_ether_addr((struct rte_ether_addr *)hw->mac.mac_addr))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 85495bbe89..3a4b699ae2 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -667,7 +667,7 @@ hns3_rss_set_default_args(struct hns3_hw *hw)
}
/*
- * RSS initialization for hns3 pmd driver.
+ * RSS initialization for hns3 PMD.
*/
int
hns3_config_rss(struct hns3_adapter *hns)
@@ -739,7 +739,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/*
- * RSS uninitialization for hns3 pmd driver.
+ * RSS uninitialization for hns3 PMD.
*/
void
hns3_rss_uninit(struct hns3_adapter *hns)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 40cc4e9c1a..f365daadf8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1899,8 +1899,8 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
/*
* For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
* the pvid_sw_discard_en in the queue struct should not be changed,
- * because PVID-related operations do not need to be processed by PMD
- * driver. For hns3 VF device, whether it needs to process PVID depends
+ * because PVID-related operations do not need to be processed by PMD.
+ * For hns3 VF device, whether it needs to process PVID depends
* on the configuration of PF kernel mode netdevice driver. And the
* related PF configuration is delivered through the mailbox and finally
* reflectd in port_base_vlan_cfg.
@@ -3039,8 +3039,8 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
/*
* For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
* the pvid_sw_shift_en in the queue struct should not be changed,
- * because PVID-related operations do not need to be processed by PMD
- * driver. For hns3 VF device, whether it needs to process PVID depends
+ * because PVID-related operations do not need to be processed by PMD.
+ * For hns3 VF device, whether it needs to process PVID depends
* on the configuration of PF kernel mode netdev driver. And the
* related PF configuration is delivered through the mailbox and finally
* reflectd in port_base_vlan_cfg.
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index df731856ef..5423568cd0 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -318,7 +318,7 @@ struct hns3_rx_queue {
* should not be transitted to the upper-layer application. For hardware
* network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
* such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
- * driver does not need to perform PVID-related operation in Rx. At this
+ * does not need to perform PVID-related operation in Rx. At this
* point, the pvid_sw_discard_en will be false.
*/
uint8_t pvid_sw_discard_en:1;
@@ -490,7 +490,7 @@ struct hns3_tx_queue {
* PVID will overwrite the outer VLAN field of Tx BD. For the hardware
* network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
* such as kunpeng 930, if the PVID is set, the hardware will shift the
- * VLAN field automatically. So, PMD driver does not need to do
+ * VLAN field automatically. So, PMD does not need to do
* PVID-related operations in Tx. And pvid_sw_shift_en will be false at
* this point.
*/
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 344cbd25d3..c0bfff43ee 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1922,7 +1922,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
goto err;
/* VMDQ setup.
- * General PMD driver call sequence are NIC init, configure,
+ * General PMD call sequence is NIC init, configure,
* rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it
* will try to lookup the VSI that specific queue belongs to if VMDQ
* applicable. So, VMDQ setting has to be done before
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 3556c9cd17..8b35fa119c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -8,7 +8,7 @@
*
* @file dpdk/pmd/nfp_net_pmd.h
*
- * Netronome NFP_NET PMD driver
+ * Netronome NFP_NET PMD
*/
#ifndef _NFP_COMMON_H_
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 830863af28..8e81cc498f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -342,7 +342,7 @@ nfp_net_close(struct rte_eth_dev *dev)
(void *)dev);
/*
- * The ixgbe PMD driver disables the pcie master on the
+ * The ixgbe PMD disables the pcie master on the
* device. The i40e does not...
*/
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 5557a1e002..303ef72b1b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -238,7 +238,7 @@ nfp_netvf_close(struct rte_eth_dev *dev)
(void *)dev);
/*
- * The ixgbe PMD driver disables the pcie master on the
+ * The ixgbe PMD disables the pcie master on the
* device. The i40e does not...
*/
diff --git a/drivers/raw/ifpga/base/README b/drivers/raw/ifpga/base/README
index 6b2b171b01..55d92d590a 100644
--- a/drivers/raw/ifpga/base/README
+++ b/drivers/raw/ifpga/base/README
@@ -42,5 +42,5 @@ Some features added in this version:
3. Add altera SPI master driver and Intel MAX10 device driver.
4. Add Altera I2C master driver and AT24 eeprom driver.
5. Add Device Tree support to get the configuration from card.
-6. Instruding and exposing APIs to DPDK PMD driver to access networking
+6. Introducing and exposing APIs to DPDK PMD to access networking
functionality.
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index ff193f2d65..1dbcf73b0e 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -164,7 +164,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_start(uint16_t dev_id);
@@ -207,7 +207,7 @@ rte_bbdev_close(uint16_t dev_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
@@ -222,7 +222,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
@@ -782,7 +782,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
@@ -798,7 +798,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
@@ -825,7 +825,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
* @return
* - 0 on success
* - ENOTSUP if interrupts are not supported by the identified device
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
diff --git a/lib/compressdev/rte_compressdev_pmd.h b/lib/compressdev/rte_compressdev_pmd.h
index 16b6bc6b35..945a991fd6 100644
--- a/lib/compressdev/rte_compressdev_pmd.h
+++ b/lib/compressdev/rte_compressdev_pmd.h
@@ -319,7 +319,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
* PMD assist function to parse initialisation arguments for comp driver
* when creating a new comp PMD device instance.
*
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling function,
* these default values will be over-written with successfully parsed values
* from args string.
*
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 89bf2af399..a6b25d297b 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -483,7 +483,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
* PMD assist function to parse initialisation arguments for crypto driver
* when creating a new crypto PMD device instance.
*
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling function,
* these default values will be over-written with successfully parsed values
* from args string.
*
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index e42d8739ab..064785686f 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -59,7 +59,7 @@ typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t v
* functions.
*
* The 'dev_private' field was placed in the first cache line to optimize
- * performance because the PMD driver mainly depends on this field.
+ * performance because the PMD mainly depends on this field.
*/
struct rte_dma_fp_object {
/** PMD-specific private data. The driver should copy
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6c3f774672..448a41cb0e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -8,7 +8,7 @@
/**
* @file
*
- * RTE PMD Driver Registration Interface
+ * RTE PMD Registration Interface
*
* This file manages the list of device drivers.
*/
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index 71c8af9df3..37a0f042ab 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -35,7 +35,7 @@ extern "C" {
/**
* Class type key in global devargs syntax.
*
- * Legacy devargs parser doesn't parse class type. PMD driver is
+ * Legacy devargs parser doesn't parse class type. PMD is
* encouraged to use this key to resolve class type.
*/
#define RTE_DEVARGS_KEY_CLASS "class"
@@ -43,7 +43,7 @@ extern "C" {
/**
* Driver type key in global devargs syntax.
*
- * Legacy devargs parser doesn't parse driver type. PMD driver is
+ * Legacy devargs parser doesn't parse driver type. PMD is
* encouraged to use this key to resolve driver type.
*/
#define RTE_DEVARGS_KEY_DRIVER "driver"
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 096b676fc1..fa299c8ad7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2610,7 +2610,7 @@ int rte_eth_tx_hairpin_queue_setup
* - (-EINVAL) if bad parameter.
* - (-ENODEV) if *port_id* invalid
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Others detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
@@ -2636,7 +2636,7 @@ int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
* - (-ENODEV) if Tx port ID is invalid.
* - (-EBUSY) if device is not in started state.
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Others detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
@@ -2663,7 +2663,7 @@ int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
* - (-ENODEV) if Tx port ID is invalid.
* - (-EBUSY) if device is in stopped state.
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Others detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
@@ -2706,7 +2706,7 @@ int rte_eth_dev_is_valid_port(uint16_t port_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function not supported in PMD.
*/
int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
@@ -2724,7 +2724,7 @@ int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function not supported in PMD.
*/
int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
@@ -2743,7 +2743,7 @@ int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function not supported in PMD.
*/
int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
@@ -2761,7 +2761,7 @@ int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function not supported in PMD.
*/
int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
@@ -2963,7 +2963,7 @@ int rte_eth_allmulticast_get(uint16_t port_id);
* Link information written back.
* @return
* - (0) if successful.
- * - (-ENOTSUP) if the function is not supported in PMD driver.
+ * - (-ENOTSUP) if the function is not supported in PMD.
* - (-ENODEV) if *port_id* invalid.
* - (-EINVAL) if bad parameter.
*/
@@ -2979,7 +2979,7 @@ int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link);
* Link information written back.
* @return
* - (0) if successful.
- * - (-ENOTSUP) if the function is not supported in PMD driver.
+ * - (-ENOTSUP) if the function is not supported in PMD.
* - (-ENODEV) if *port_id* invalid.
* - (-EINVAL) if bad parameter.
*/
--
2.25.1
^ permalink raw reply [relevance 1%]
* [PATCH v2 1/3] fix PMD wording typo
@ 2021-11-22 10:50 1% ` Sean Morrissey
0 siblings, 0 replies; 200+ results
From: Sean Morrissey @ 2021-11-22 10:50 UTC (permalink / raw)
To: Xiaoyun Li, Nicolas Chautru, Jay Zhou, Ciara Loftus, Qi Zhang,
Steven Webster, Matt Peters, Apeksha Gupta, Sachin Saxena,
Xiao Wang, Haiyue Wang, Beilei Xing, Stephen Hemminger, Long Li,
Heinrich Kuhn, Jerin Jacob, Maciej Czekaj, Maxime Coquelin,
Chenbo Xia, Konstantin Ananyev, Andrew Rybchenko, Fiona Trahe,
Ashish Gupta, John Griffin, Deepak Kumar Jain, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Rosen Xu, Tianfei zhang, Akhil Goyal,
Declan Doherty, Chengwen Feng, Kevin Laatz, Bruce Richardson,
Thomas Monjalon, Ferruh Yigit
Cc: dev, Sean Morrissey, Conor Fogarty, John McNamara, Conor Walsh
Removing the use of "driver" following PMD as it's
unnecessary.
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
app/test-pmd/cmdline.c | 4 +--
doc/guides/bbdevs/turbo_sw.rst | 2 +-
doc/guides/cryptodevs/virtio.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/nics/af_packet.rst | 2 +-
doc/guides/nics/af_xdp.rst | 2 +-
doc/guides/nics/avp.rst | 4 +--
doc/guides/nics/enetfec.rst | 2 +-
doc/guides/nics/fm10k.rst | 4 +--
doc/guides/nics/intel_vf.rst | 2 +-
doc/guides/nics/netvsc.rst | 2 +-
doc/guides/nics/nfp.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/nics/virtio.rst | 4 +--
.../prog_guide/writing_efficient_code.rst | 4 +--
doc/guides/rel_notes/known_issues.rst | 2 +-
doc/guides/rel_notes/release_16_04.rst | 2 +-
doc/guides/rel_notes/release_19_05.rst | 6 ++--
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 4 +--
doc/guides/rel_notes/release_21_02.rst | 2 +-
doc/guides/rel_notes/release_21_05.rst | 2 +-
doc/guides/rel_notes/release_21_08.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/rel_notes/release_2_2.rst | 4 +--
doc/guides/sample_app_ug/bbdev_app.rst | 2 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
doc/guides/tools/testeventdev.rst | 2 +-
drivers/common/sfc_efx/efsys.h | 2 +-
drivers/compress/qat/qat_comp_pmd.h | 2 +-
drivers/crypto/qat/qat_asym_pmd.h | 2 +-
drivers/crypto/qat/qat_sym_pmd.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/base/hinic_pmd_cmdq.h | 2 +-
drivers/net/hns3/hns3_ethdev.c | 6 ++--
drivers/net/hns3/hns3_ethdev.h | 8 +++---
drivers/net/hns3/hns3_ethdev_vf.c | 28 +++++++++----------
drivers/net/hns3/hns3_rss.c | 4 +--
drivers/net/hns3/hns3_rxtx.c | 8 +++---
drivers/net/hns3/hns3_rxtx.h | 4 +--
drivers/net/i40e/i40e_ethdev.c | 2 +-
drivers/net/nfp/nfp_common.h | 2 +-
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/raw/ifpga/base/README | 2 +-
lib/bbdev/rte_bbdev.h | 12 ++++----
lib/compressdev/rte_compressdev_pmd.h | 2 +-
lib/cryptodev/cryptodev_pmd.h | 2 +-
lib/dmadev/rte_dmadev_core.h | 2 +-
lib/eal/include/rte_dev.h | 2 +-
lib/eal/include/rte_devargs.h | 4 +--
lib/ethdev/rte_ethdev.h | 18 ++++++------
52 files changed, 98 insertions(+), 98 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c43c85c591..6e10afeedd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2701,7 +2701,7 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
ret = rte_eth_dev_tx_queue_stop(res->portid, res->qid);
if (ret == -ENOTSUP)
- fprintf(stderr, "Function not supported in PMD driver\n");
+ fprintf(stderr, "Function not supported in PMD\n");
}
cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
@@ -14700,7 +14700,7 @@ cmd_ddp_info_parsed(
free(proto);
#endif
if (ret == -ENOTSUP)
- fprintf(stderr, "Function not supported in PMD driver\n");
+ fprintf(stderr, "Function not supported in PMD\n");
close_file(pkg);
}
diff --git a/doc/guides/bbdevs/turbo_sw.rst b/doc/guides/bbdevs/turbo_sw.rst
index 43c5129fd7..1e23e37027 100644
--- a/doc/guides/bbdevs/turbo_sw.rst
+++ b/doc/guides/bbdevs/turbo_sw.rst
@@ -149,7 +149,7 @@ Example:
* For AVX512 machines with SDK libraries installed then both 4G and 5G can be enabled for full real time FEC capability.
For AVX2 machines it is possible to only enable the 4G libraries and the PMD capabilities will be limited to 4G FEC.
- If no library is present then the PMD driver will still build but its capabilities will be limited accordingly.
+ If no library is present then the PMD will still build but its capabilities will be limited accordingly.
To use the PMD in an application, user must:
diff --git a/doc/guides/cryptodevs/virtio.rst b/doc/guides/cryptodevs/virtio.rst
index 8b96446ff2..ce4d43519a 100644
--- a/doc/guides/cryptodevs/virtio.rst
+++ b/doc/guides/cryptodevs/virtio.rst
@@ -73,7 +73,7 @@ number of the virtio-crypto device:
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
echo "1af4 1054" > /sys/bus/pci/drivers/uio_pci_generic/new_id
-Finally the front-end virtio crypto PMD driver can be installed.
+Finally the front-end virtio crypto PMD can be installed.
Tests
-----
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index efd2dd23f1..4f99617233 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -66,7 +66,7 @@ The EAL options are as follows:
* ``-d``:
Add a driver or driver directory to be loaded.
- The application should use this option to load the pmd drivers
+ The application should use this option to load the PMDs
that are built as shared libraries.
* ``-m MB``:
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index 54feffdef4..8292369141 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -5,7 +5,7 @@ AF_PACKET Poll Mode Driver
==========================
The AF_PACKET socket in Linux allows an application to receive and send raw
-packets. This Linux-specific PMD driver binds to an AF_PACKET socket and allows
+packets. This Linux-specific PMD binds to an AF_PACKET socket and allows
a DPDK application to send and receive raw packets through the Kernel.
In order to improve Rx and Tx performance this implementation makes use of
diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index 8bf40b5f0f..c9d0e1ad6c 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -12,7 +12,7 @@ For the full details behind AF_XDP socket, you can refer to
`AF_XDP documentation in the Kernel
<https://www.kernel.org/doc/Documentation/networking/af_xdp.rst>`_.
-This Linux-specific PMD driver creates the AF_XDP socket and binds it to a
+This Linux-specific PMD creates the AF_XDP socket and binds it to a
specific netdev queue, it allows a DPDK application to send and receive raw
packets through the socket which would bypass the kernel network stack.
Current implementation only supports single queue, multi-queues feature will
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
index 1a194fc23c..a749f2a0f6 100644
--- a/doc/guides/nics/avp.rst
+++ b/doc/guides/nics/avp.rst
@@ -35,7 +35,7 @@ to another with minimal packet loss.
Features and Limitations of the AVP PMD
---------------------------------------
-The AVP PMD driver provides the following functionality.
+The AVP PMD provides the following functionality.
* Receive and transmit of both simple and chained mbuf packets,
@@ -74,7 +74,7 @@ Launching a VM with an AVP type network attachment
The following example will launch a VM with three network attachments. The
first attachment will have a default vif-model of "virtio". The next two
network attachments will have a vif-model of "avp" and may be used with a DPDK
-application which is built to include the AVP PMD driver.
+application which is built to include the AVP PMD.
.. code-block:: console
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index a64e72fdd6..381635e627 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -65,7 +65,7 @@ The diagram below shows a system level overview of ENETFEC:
| PHY |
+-----+
-ENETFEC Ethernet driver is traditional DPDK PMD driver running in userspace.
+ENETFEC Ethernet driver is traditional DPDK PMD running in userspace.
'fec-uio' is the kernel driver.
The MAC and PHY are the hardware blocks.
ENETFEC PMD uses standard UIO interface to access kernel
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index bba53f5a64..d6efac0917 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -114,9 +114,9 @@ Switch manager
~~~~~~~~~~~~~~
The Intel FM10000 family of NICs integrate a hardware switch and multiple host
-interfaces. The FM10000 PMD driver only manages host interfaces. For the
+interfaces. The FM10000 PMD only manages host interfaces. For the
switch component another switch driver has to be loaded prior to the
-FM10000 PMD driver. The switch driver can be acquired from Intel support.
+FM10000 PMD. The switch driver can be acquired from Intel support.
Only Testpoint is validated with DPDK, the latest version that has been
validated with DPDK is 4.1.6.
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fd235e1463..648af39c22 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -571,7 +571,7 @@ Fast Host-based Packet Processing
Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
-the DPDK VF PMD driver performs the same throughput result as a non-VT native environment.
+the DPDK VF PMD performs the same throughput result as a non-VT native environment.
With such host instance fast packet processing, lots of services such as filtering, QoS,
DPI can be offloaded on the host fast path.
diff --git a/doc/guides/nics/netvsc.rst b/doc/guides/nics/netvsc.rst
index c0e218c743..77efe1dc91 100644
--- a/doc/guides/nics/netvsc.rst
+++ b/doc/guides/nics/netvsc.rst
@@ -14,7 +14,7 @@ checksum and segmentation offloads.
Features and Limitations of Hyper-V PMD
---------------------------------------
-In this release, the hyper PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the hyper PMD provides the basic functionality of packet reception and transmission.
* It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
when transmitting packets. The packet size supported is from 64 to 65536.
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index bf8be723b0..30cdc69202 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -14,7 +14,7 @@ This document explains how to use DPDK with the Netronome Poll Mode
Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
(NFP-6xxx) and Netronome's Flow Processor 4xxx (NFP-4xxx).
-NFP is a SRIOV capable device and the PMD driver supports the physical
+NFP is a SRIOV capable device and the PMD supports the physical
function (PF) and the virtual functions (VFs).
Dependencies
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 98f23a2b2a..d96395dafa 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -199,7 +199,7 @@ Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/
When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
Each secondary VF adds 8 additional queues to the queue set.
-During PMD driver initialization, the primary VF's are enumerated by checking the
+During PMD initialization, the primary VF's are enumerated by checking the
specific flag (see sqs message in DPDK boot log - sqs indicates secondary queue set).
They are at the beginning of VF list (the remain ones are secondary VF's).
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 98e0d012b7..7c0ae2b3af 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -17,7 +17,7 @@ With this enhancement, virtio could achieve quite promising performance.
For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
please refer to Chapter "Driver for VM Emulated Devices".
-In this chapter, we will demonstrate usage of virtio PMD driver with two backends,
+In this chapter, we will demonstrate usage of virtio PMD with two backends,
standard qemu vhost back end and vhost kni back end.
Virtio Implementation in DPDK
@@ -40,7 +40,7 @@ end if necessary.
Features and Limitations of virtio PMD
--------------------------------------
-In this release, the virtio PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the virtio PMD provides the basic functionality of packet reception and transmission.
* It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
when transmitting packets. The packet size supported is from 64 to 1518.
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index a61e8320ae..e6c26efdd3 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -119,8 +119,8 @@ The code algorithm that dequeues messages may be something similar to the follow
my_process_bulk(obj_table, count);
}
-PMD Driver
-----------
+PMD
+---
The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
allowing the factorization of some code for each call in the send or receive function.
diff --git a/doc/guides/rel_notes/known_issues.rst b/doc/guides/rel_notes/known_issues.rst
index beea877bad..187d9c942e 100644
--- a/doc/guides/rel_notes/known_issues.rst
+++ b/doc/guides/rel_notes/known_issues.rst
@@ -250,7 +250,7 @@ PMD does not work with --no-huge EAL command line parameter
**Description**:
Currently, the DPDK does not store any information about memory allocated by ``malloc()`` (for example, NUMA node,
- physical address), hence PMD drivers do not work when the ``--no-huge`` command line parameter is supplied to EAL.
+ physical address), hence PMDs do not work when the ``--no-huge`` command line parameter is supplied to EAL.
**Implication**:
Sending and receiving data with PMD will not work.
diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
index b7d07834e1..ac18e1dddb 100644
--- a/doc/guides/rel_notes/release_16_04.rst
+++ b/doc/guides/rel_notes/release_16_04.rst
@@ -56,7 +56,7 @@ New Features
* **Enabled Virtio 1.0 support.**
- Enabled Virtio 1.0 support for Virtio pmd driver.
+ Enabled Virtio 1.0 support for Virtio PMD.
* **Supported Virtio for ARM.**
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 30f704e204..89ae425bdb 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -46,13 +46,13 @@ New Features
Updated the KNI kernel module to set the ``max_mtu`` according to the given
initial MTU size. Without it, the maximum MTU was 1500.
- Updated the KNI PMD driver to set the ``mbuf_size`` and MTU based on
+ Updated the KNI PMD to set the ``mbuf_size`` and MTU based on
the given mb-pool. This provide the ability to pass jumbo frames
if the mb-pool contains a suitable buffer size.
* **Added the AF_XDP PMD.**
- Added a Linux-specific PMD driver for AF_XDP. This PMD can create an AF_XDP socket
+ Added a Linux-specific PMD for AF_XDP. This PMD can create an AF_XDP socket
and bind it to a specific netdev queue. It allows a DPDK application to send
and receive raw packets through the socket which would bypass the kernel
network stack to achieve high performance packet processing.
@@ -240,7 +240,7 @@ ABI Changes
The ``rte_eth_dev_info`` structure has had two extra fields
added: ``min_mtu`` and ``max_mtu``. Each of these are of type ``uint16_t``.
- The values of these fields can be set specifically by the PMD drivers as
+ The values of these fields can be set specifically by the PMDs as
supported values can vary from device to device.
* cryptodev: in 18.08 a new structure ``rte_crypto_asym_op`` was introduced and
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index b509a6dd28..302b3e5f37 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -189,7 +189,7 @@ New Features
* **Added Marvell OCTEON TX2 crypto PMD.**
- Added a new PMD driver for hardware crypto offload block on ``OCTEON TX2``
+ Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
SoC.
See :doc:`../cryptodevs/octeontx2` for more details
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 90cc3ed680..af7ce90ba3 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -192,7 +192,7 @@ New Features
* **Added Wangxun txgbe PMD.**
- Added a new PMD driver for Wangxun 10 Gigabit Ethernet NICs.
+ Added a new PMD for Wangxun 10 Gigabit Ethernet NICs.
See the :doc:`../nics/txgbe` for more details.
@@ -288,7 +288,7 @@ New Features
* **Added Marvell OCTEON TX2 regex PMD.**
- Added a new PMD driver for the hardware regex offload block for OCTEON TX2 SoC.
+ Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
See the :doc:`../regexdevs/octeontx2` for more details.
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 9d5e17758f..5fbf5b3d43 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -135,7 +135,7 @@ New Features
* **Added mlx5 compress PMD.**
- Added a new compress PMD driver for Bluefield 2 adapters.
+ Added a new compress PMD for Bluefield 2 adapters.
See the :doc:`../compressdevs/mlx5` for more details.
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8adb225a4d..49044ed422 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -78,7 +78,7 @@ New Features
* Updated ena_com (HAL) to the latest version.
* Added indication of the RSS hash presence in the mbuf.
-* **Updated Arkville PMD driver.**
+* **Updated Arkville PMD.**
Updated Arkville net driver with new features and improvements, including:
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6fb4e43346..ac1c081903 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -67,7 +67,7 @@ New Features
* **Added Wangxun ngbe PMD.**
- Added a new PMD driver for Wangxun 1Gb Ethernet NICs.
+ Added a new PMD for Wangxun 1Gb Ethernet NICs.
See the :doc:`../nics/ngbe` for more details.
* **Added inflight packets clear API in vhost library.**
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..1d6774afc1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -354,7 +354,7 @@ New Features
* **Added NXP LA12xx baseband PMD.**
- * Added a new baseband PMD driver for NXP LA12xx Software defined radio.
+ * Added a new baseband PMD for NXP LA12xx Software defined radio.
* See the :doc:`../bbdevs/la12xx` for more details.
* **Updated Mellanox compress driver.**
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 8273473ff4..029b758e90 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -10,8 +10,8 @@ New Features
* **Introduce ARMv7 and ARMv8 architectures.**
* It is now possible to build DPDK for the ARMv7 and ARMv8 platforms.
- * ARMv7 can be tested with virtual PMD drivers.
- * ARMv8 can be tested with virtual and physical PMD drivers.
+ * ARMv7 can be tested with virtual PMDs.
+ * ARMv8 can be tested with virtual and physical PMDs.
* **Enabled freeing of ring.**
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 45e69e36e2..7f02f0ed90 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -31,7 +31,7 @@ Limitations
Compiling the Application
-------------------------
-DPDK needs to be built with ``baseband_turbo_sw`` PMD driver enabled along
+DPDK needs to be built with ``baseband_turbo_sw`` PMD enabled along
with ``FLEXRAN SDK`` Libraries. Refer to *SW Turbo Poll Mode Driver*
documentation for more details on this.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 486247ac2e..ecb1c857c4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -220,7 +220,7 @@ Once the application starts, it transitions through three phases:
* **Final Phase** - Perform the following tasks:
- Calls the EAL, PMD driver and ACL library to free resource, then quits.
+ Calls the EAL, PMD and ACL library to free resources, then quits.
Compiling the Application
-------------------------
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 7b4cdeb43f..48efb9ea6e 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -239,7 +239,7 @@ to the ordered queue. The worker receives the events from ordered queue and
forwards to atomic queue. Since the events from an ordered queue can be
processed in parallel on the different workers, the ingress order of events
might have changed on the downstream atomic queue enqueue. On enqueue to the
-atomic queue, the eventdev PMD driver reorders the event to the original
+atomic queue, the eventdev PMD reorders the event to the original
ingress order(i.e producer ingress order).
When the event is dequeued from the atomic queue by the worker, this test
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index b2109bf3c0..3860c2835a 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -609,7 +609,7 @@ typedef struct efsys_bar_s {
/* DMA SYNC */
/*
- * DPDK does not provide any DMA syncing API, and no PMD drivers
+ * DPDK does not provide any DMA syncing API, and no PMDs
* have any traces of explicit DMA syncing.
* DMA mapping is assumed to be coherent.
*/
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 86317a513c..3c8682a768 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -13,7 +13,7 @@
#include "qat_device.h"
#include "qat_comp.h"
-/**< Intel(R) QAT Compression PMD driver name */
+/**< Intel(R) QAT Compression PMD name */
#define COMPRESSDEV_NAME_QAT_PMD compress_qat
/* Private data structure for a QAT compression device capability. */
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..f988d646e5 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -10,7 +10,7 @@
#include "qat_crypto.h"
#include "qat_device.h"
-/** Intel(R) QAT Asymmetric Crypto PMD driver name */
+/** Intel(R) QAT Asymmetric Crypto PMD name */
#define CRYPTODEV_NAME_QAT_ASYM_PMD crypto_qat_asym
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index 0dc0c6f0d9..59fbdefa12 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -16,7 +16,7 @@
#include "qat_crypto.h"
#include "qat_device.h"
-/** Intel(R) QAT Symmetric Crypto PMD driver name */
+/** Intel(R) QAT Symmetric Crypto PMD name */
#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat
/* Internal capabilities */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7c85a05746..43e1d13431 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -255,7 +255,7 @@ rx_queue_clean(struct fm10k_rx_queue *q)
for (i = 0; i < q->nb_fake_desc; ++i)
q->hw_ring[q->nb_desc + i] = zero;
- /* vPMD driver has a different way of releasing mbufs. */
+ /* vPMD has a different way of releasing mbufs. */
if (q->rx_using_sse) {
fm10k_rx_queue_release_mbufs_vec(q);
return;
diff --git a/drivers/net/hinic/base/hinic_pmd_cmdq.h b/drivers/net/hinic/base/hinic_pmd_cmdq.h
index 0d5e380123..58a1fbda71 100644
--- a/drivers/net/hinic/base/hinic_pmd_cmdq.h
+++ b/drivers/net/hinic/base/hinic_pmd_cmdq.h
@@ -9,7 +9,7 @@
#define HINIC_SCMD_DATA_LEN 16
-/* pmd driver uses 64, kernel l2nic use 4096 */
+/* PMD uses 64, kernel l2nic uses 4096 */
#define HINIC_CMDQ_DEPTH 64
#define HINIC_CMDQ_BUF_SIZE 2048U
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 847e660f44..0bd12907d8 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -1060,7 +1060,7 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
return ret;
/*
* Only in HNS3_SW_SHIFT_AND_MODE the PVID related operation in Tx/Rx
- * need be processed by PMD driver.
+ * needs to be processed by the PMD.
*/
if (pvid_en_state_change &&
hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
@@ -2592,7 +2592,7 @@ hns3_parse_cfg(struct hns3_cfg *cfg, struct hns3_cmd_desc *desc)
* Field ext_rss_size_max obtained from firmware will be more flexible
* for future changes and expansions, which is an exponent of 2, instead
* of reading out directly. If this field is not zero, hns3 PF PMD
- * driver uses it as rss_size_max under one TC. Device, whose revision
+ * uses it as rss_size_max under one TC. Device, whose revision
* id is greater than or equal to PCI_REVISION_ID_HIP09_A, obtains the
* maximum number of queues supported under a TC through this field.
*/
@@ -6311,7 +6311,7 @@ hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
if (ret < 0)
return ret;
- /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
+ /* HNS3 PMD only supports one bit set mode, e.g. 0x1, 0x4 */
if (!is_fec_mode_one_bit_set(mode)) {
hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
"FEC mode should be only one bit set", mode);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 6d30125dcc..aa45b31261 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -465,8 +465,8 @@ struct hns3_queue_intr {
* enable Rx interrupt.
*
* - HNS3_INTR_MAPPING_VEC_ALL
- * PMD driver can map/unmmap all interrupt vectors with queues When
- * Rx interrupt in enabled.
+ * PMD can map/unmap all interrupt vectors with queues when
+ * Rx interrupt is enabled.
*/
uint8_t mapping_mode;
/*
@@ -575,14 +575,14 @@ struct hns3_hw {
*
* - HNS3_SW_SHIFT_AND_DISCARD_MODE
* For some versions of hardware network engine, because of the
- * hardware limitation, PMD driver needs to detect the PVID status
+ * hardware limitation, PMD needs to detect the PVID status
* to work with haredware to implement PVID-related functions.
* For example, driver need discard the stripped PVID tag to ensure
* the PVID will not report to mbuf and shift the inserted VLAN tag
* to avoid port based VLAN covering it.
*
* - HNS3_HW_SHIT_AND_DISCARD_MODE
- * PMD driver does not need to process PVID-related functions in
+ * PMD does not need to process PVID-related functions in
* I/O process, Hardware will adjust the sequence between port based
* VLAN tag and BD VLAN tag automatically and VLAN tag stripped by
* PVID will be invisible to driver. And in this mode, hns3 is able
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index d8a99693e0..805abd4543 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -232,7 +232,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
if (ret) {
/*
- * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev
+ * The hns3 VF PMD depends on the hns3 PF kernel ethdev
* driver. When user has configured a MAC address for VF device
* by "ip link set ..." command based on the PF device, the hns3
* PF kernel ethdev driver does not allow VF driver to request
@@ -312,9 +312,9 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
/*
- * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev driver,
+ * The hns3 VF PMD depends on the hns3 PF kernel ethdev driver,
* so there are some features for promiscuous/allmulticast mode in hns3
- * VF PMD driver as below:
+ * VF PMD as below:
* 1. The promiscuous/allmulticast mode can be configured successfully
* only based on the trusted VF device. If based on the non trusted
* VF device, configuring promiscuous/allmulticast mode will fail.
@@ -322,14 +322,14 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
* kernel ethdev driver on the host by the following command:
* "ip link set <eth num> vf <vf id> turst on"
* 2. After the promiscuous mode is configured successfully, hns3 VF PMD
- * driver can receive the ingress and outgoing traffic. In the words,
+ * can receive the ingress and outgoing traffic. This includes
* all the ingress packets, all the packets sent from the PF and
* other VFs on the same physical port.
* 3. Note: Because of the hardware constraints, By default vlan filter
* is enabled and couldn't be turned off based on VF device, so vlan
* filter is still effective even in promiscuous mode. If upper
* applications don't call rte_eth_dev_vlan_filter API function to
- * set vlan based on VF device, hns3 VF PMD driver will can't receive
+ * set vlan based on VF device, hns3 VF PMD will not be able to receive
* the packets with vlan tag in promiscuoue mode.
*/
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
@@ -553,9 +553,9 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The hns3 PF/VF devices on the same port share the hardware MTU
* configuration. Currently, we send mailbox to inform hns3 PF kernel
- * ethdev driver to finish hardware MTU configuration in hns3 VF PMD
- * driver, there is no need to stop the port for hns3 VF device, and the
- * MTU value issued by hns3 VF PMD driver must be less than or equal to
+ * ethdev driver to finish hardware MTU configuration in hns3 VF PMD,
+ * so there is no need to stop the port for hns3 VF device, and the
+ * MTU value issued by hns3 VF PMD must be less than or equal to
* PF's MTU.
*/
if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) {
@@ -565,8 +565,8 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/*
* when Rx of scattered packets is off, we have some possibility of
- * using vector Rx process function or simple Rx functions in hns3 PMD
- * driver. If the input MTU is increased and the maximum length of
+ * using vector Rx process function or simple Rx functions in hns3 PMD.
+ * If the input MTU is increased and the maximum length of
* received packets is greater than the length of a buffer for Rx
* packet, the hardware network engine needs to use multiple BDs and
* buffers to store these packets. This will cause problems when still
@@ -2075,7 +2075,7 @@ hns3vf_check_default_mac_change(struct hns3_hw *hw)
* ethdev driver sets the MAC address for VF device after the
* initialization of the related VF device, the PF driver will notify
* VF driver to reset VF device to make the new MAC address effective
- * immediately. The hns3 VF PMD driver should check whether the MAC
+ * immediately. The hns3 VF PMD should check whether the MAC
* address has been changed by the PF kernel ethdev driver, if changed
* VF driver should configure hardware using the new MAC address in the
* recovering hardware configuration stage of the reset process.
@@ -2416,12 +2416,12 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
/*
* The hns3 PF ethdev driver in kernel support setting VF MAC address
* on the host by "ip link set ..." command. To avoid some incorrect
- * scenes, for example, hns3 VF PMD driver fails to receive and send
+ * scenes, for example, hns3 VF PMD fails to receive and send
* packets after user configure the MAC address by using the
- * "ip link set ..." command, hns3 VF PMD driver keep the same MAC
+ * "ip link set ..." command, hns3 VF PMD keeps the same MAC
* address strategy as the hns3 kernel ethdev driver in the
* initialization. If user configure a MAC address by the ip command
- * for VF device, then hns3 VF PMD driver will start with it, otherwise
+ * for VF device, then hns3 VF PMD will start with it, otherwise
* start with a random MAC address in the initialization.
*/
if (rte_is_zero_ether_addr((struct rte_ether_addr *)hw->mac.mac_addr))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 85495bbe89..3a4b699ae2 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -667,7 +667,7 @@ hns3_rss_set_default_args(struct hns3_hw *hw)
}
/*
- * RSS initialization for hns3 pmd driver.
+ * RSS initialization for hns3 PMD.
*/
int
hns3_config_rss(struct hns3_adapter *hns)
@@ -739,7 +739,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/*
- * RSS uninitialization for hns3 pmd driver.
+ * RSS uninitialization for hns3 PMD.
*/
void
hns3_rss_uninit(struct hns3_adapter *hns)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 40cc4e9c1a..f365daadf8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1899,8 +1899,8 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
/*
* For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
* the pvid_sw_discard_en in the queue struct should not be changed,
- * because PVID-related operations do not need to be processed by PMD
- * driver. For hns3 VF device, whether it needs to process PVID depends
+ * because PVID-related operations do not need to be processed by PMD.
+ * For hns3 VF device, whether it needs to process PVID depends
* on the configuration of PF kernel mode netdevice driver. And the
* related PF configuration is delivered through the mailbox and finally
* reflectd in port_base_vlan_cfg.
@@ -3039,8 +3039,8 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
/*
* For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
* the pvid_sw_shift_en in the queue struct should not be changed,
- * because PVID-related operations do not need to be processed by PMD
- * driver. For hns3 VF device, whether it needs to process PVID depends
+ * because PVID-related operations do not need to be processed by PMD.
+ * For hns3 VF device, whether it needs to process PVID depends
* on the configuration of PF kernel mode netdev driver. And the
* related PF configuration is delivered through the mailbox and finally
* reflectd in port_base_vlan_cfg.
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index df731856ef..5423568cd0 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -318,7 +318,7 @@ struct hns3_rx_queue {
* should not be transitted to the upper-layer application. For hardware
* network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
* such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
- * driver does not need to perform PVID-related operation in Rx. At this
+ * does not need to perform PVID-related operations in Rx. At this
* point, the pvid_sw_discard_en will be false.
*/
uint8_t pvid_sw_discard_en:1;
@@ -490,7 +490,7 @@ struct hns3_tx_queue {
* PVID will overwrite the outer VLAN field of Tx BD. For the hardware
* network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
* such as kunpeng 930, if the PVID is set, the hardware will shift the
- * VLAN field automatically. So, PMD driver does not need to do
+ * VLAN field automatically. So, PMD does not need to do
* PVID-related operations in Tx. And pvid_sw_shift_en will be false at
* this point.
*/
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 344cbd25d3..c0bfff43ee 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1922,7 +1922,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
goto err;
/* VMDQ setup.
- * General PMD driver call sequence are NIC init, configure,
+ * General PMD call sequence is NIC init, configure,
* rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it
* will try to lookup the VSI that specific queue belongs to if VMDQ
* applicable. So, VMDQ setting has to be done before
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 3556c9cd17..8b35fa119c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -8,7 +8,7 @@
*
* @file dpdk/pmd/nfp_net_pmd.h
*
- * Netronome NFP_NET PMD driver
+ * Netronome NFP_NET PMD
*/
#ifndef _NFP_COMMON_H_
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 830863af28..8e81cc498f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -342,7 +342,7 @@ nfp_net_close(struct rte_eth_dev *dev)
(void *)dev);
/*
- * The ixgbe PMD driver disables the pcie master on the
+ * The ixgbe PMD disables the pcie master on the
* device. The i40e does not...
*/
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 5557a1e002..303ef72b1b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -238,7 +238,7 @@ nfp_netvf_close(struct rte_eth_dev *dev)
(void *)dev);
/*
- * The ixgbe PMD driver disables the pcie master on the
+ * The ixgbe PMD disables the pcie master on the
* device. The i40e does not...
*/
diff --git a/drivers/raw/ifpga/base/README b/drivers/raw/ifpga/base/README
index 6b2b171b01..55d92d590a 100644
--- a/drivers/raw/ifpga/base/README
+++ b/drivers/raw/ifpga/base/README
@@ -42,5 +42,5 @@ Some features added in this version:
3. Add altera SPI master driver and Intel MAX10 device driver.
4. Add Altera I2C master driver and AT24 eeprom driver.
5. Add Device Tree support to get the configuration from card.
-6. Instruding and exposing APIs to DPDK PMD driver to access networking
+6. Introducing and exposing APIs to DPDK PMD to access networking
functionality.
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index ff193f2d65..1dbcf73b0e 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -164,7 +164,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_start(uint16_t dev_id);
@@ -207,7 +207,7 @@ rte_bbdev_close(uint16_t dev_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
@@ -222,7 +222,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
@@ -782,7 +782,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
@@ -798,7 +798,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
*
* @return
* - 0 on success
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
@@ -825,7 +825,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
* @return
* - 0 on success
* - ENOTSUP if interrupts are not supported by the identified device
- * - negative value on failure - as returned from PMD driver
+ * - negative value on failure - as returned from PMD
*/
int
rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
diff --git a/lib/compressdev/rte_compressdev_pmd.h b/lib/compressdev/rte_compressdev_pmd.h
index 16b6bc6b35..945a991fd6 100644
--- a/lib/compressdev/rte_compressdev_pmd.h
+++ b/lib/compressdev/rte_compressdev_pmd.h
@@ -319,7 +319,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
* PMD assist function to parse initialisation arguments for comp driver
* when creating a new comp PMD device instance.
*
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling this function,
* these default values will be over-written with successfully parsed values
* from args string.
*
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 89bf2af399..a6b25d297b 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -483,7 +483,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
* PMD assist function to parse initialisation arguments for crypto driver
* when creating a new crypto PMD device instance.
*
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling this function,
* these default values will be over-written with successfully parsed values
* from args string.
*
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index e42d8739ab..064785686f 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -59,7 +59,7 @@ typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t v
* functions.
*
* The 'dev_private' field was placed in the first cache line to optimize
- * performance because the PMD driver mainly depends on this field.
+ * performance because the PMD mainly depends on this field.
*/
struct rte_dma_fp_object {
/** PMD-specific private data. The driver should copy
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6c3f774672..448a41cb0e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -8,7 +8,7 @@
/**
* @file
*
- * RTE PMD Driver Registration Interface
+ * RTE PMD Registration Interface
*
* This file manages the list of device drivers.
*/
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index 71c8af9df3..37a0f042ab 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -35,7 +35,7 @@ extern "C" {
/**
* Class type key in global devargs syntax.
*
- * Legacy devargs parser doesn't parse class type. PMD driver is
+ * Legacy devargs parser doesn't parse class type. PMD is
* encouraged to use this key to resolve class type.
*/
#define RTE_DEVARGS_KEY_CLASS "class"
@@ -43,7 +43,7 @@ extern "C" {
/**
* Driver type key in global devargs syntax.
*
- * Legacy devargs parser doesn't parse driver type. PMD driver is
+ * Legacy devargs parser doesn't parse driver type. PMD is
* encouraged to use this key to resolve driver type.
*/
#define RTE_DEVARGS_KEY_DRIVER "driver"
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 096b676fc1..fa299c8ad7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2610,7 +2610,7 @@ int rte_eth_tx_hairpin_queue_setup
* - (-EINVAL) if bad parameter.
* - (-ENODEV) if *port_id* invalid
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Other detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
@@ -2636,7 +2636,7 @@ int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
* - (-ENODEV) if Tx port ID is invalid.
* - (-EBUSY) if device is not in started state.
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Other detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
@@ -2663,7 +2663,7 @@ int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
* - (-ENODEV) if Tx port ID is invalid.
* - (-EBUSY) if device is in stopped state.
* - (-ENOTSUP) if hardware doesn't support.
- * - Others detailed errors from PMD drivers.
+ * - Other detailed errors from PMDs.
*/
__rte_experimental
int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
@@ -2706,7 +2706,7 @@ int rte_eth_dev_is_valid_port(uint16_t port_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function is not supported by the PMD.
*/
int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
@@ -2724,7 +2724,7 @@ int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function is not supported by the PMD.
*/
int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
@@ -2743,7 +2743,7 @@ int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function is not supported by the PMD.
*/
int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
@@ -2761,7 +2761,7 @@ int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
* - -ENODEV: if *port_id* is invalid.
* - -EINVAL: The queue_id out of range or belong to hairpin.
* - -EIO: if device is removed.
- * - -ENOTSUP: The function not supported in PMD driver.
+ * - -ENOTSUP: The function is not supported by the PMD.
*/
int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
@@ -2963,7 +2963,7 @@ int rte_eth_allmulticast_get(uint16_t port_id);
* Link information written back.
* @return
* - (0) if successful.
- * - (-ENOTSUP) if the function is not supported in PMD driver.
+ * - (-ENOTSUP) if the function is not supported by the PMD.
* - (-ENODEV) if *port_id* invalid.
* - (-EINVAL) if bad parameter.
*/
@@ -2979,7 +2979,7 @@ int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link);
* Link information written back.
* @return
* - (0) if successful.
- * - (-ENOTSUP) if the function is not supported in PMD driver.
+ * - (-ENOTSUP) if the function is not supported by the PMD.
* - (-ENODEV) if *port_id* invalid.
* - (-EINVAL) if bad parameter.
*/
--
2.25.1
^ permalink raw reply [relevance 1%]
* [PATCH v1] doc: update release notes for 21.11
@ 2021-11-22 17:00 12% John McNamara
2021-11-22 17:05 0% ` Ajit Khaparde
0 siblings, 1 reply; 200+ results
From: John McNamara @ 2021-11-22 17:00 UTC (permalink / raw)
To: dev; +Cc: thomas, John McNamara
Fix grammar, spelling and formatting of DPDK 21.11 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 123 +++++++++++++------------
1 file changed, 65 insertions(+), 58 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..7008c5e907 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -57,14 +57,14 @@ New Features
* **Enabled new devargs parser.**
- * Enabled devargs syntax
- ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``
+ * Enabled devargs syntax:
+ ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``.
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
* **Updated EAL hugetlbfs mount handling for Linux.**
- * Modified to allow ``--huge-dir`` option to specify a sub-directory
+ * Modified EAL to allow ``--huge-dir`` option to specify a sub-directory
within a hugetlbfs mountpoint.
* **Added dmadev library.**
@@ -82,7 +82,7 @@ New Features
* **Added IDXD dmadev driver implementation.**
- The IDXD dmadev driver provide device drivers for the Intel DSA devices.
+ The IDXD dmadev driver provides device drivers for the Intel DSA devices.
This device driver can be used through the generic dmadev API.
* **Added IOAT dmadev driver implementation.**
@@ -98,29 +98,34 @@ New Features
* **Added NXP DPAA DMA driver.**
- Added a new dmadev driver for NXP DPAA platform.
+ Added a new dmadev driver for the NXP DPAA platform.
* **Added support to get all MAC addresses of a device.**
- Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
- addresses assigned to given ethernet port.
+ Added ``rte_eth_macaddrs_get`` to allow a user to retrieve all Ethernet
+ addresses assigned to a given Ethernet port.
-* **Introduced GPU device class with first features:**
+* **Introduced GPU device class.**
- * Device information
- * Memory management
- * Communication flag & list
+ Introduced the GPU device class with initial features:
+
+ * Device information.
+ * Memory management.
+ * Communication flag and list.
* **Added NVIDIA GPU driver implemented with CUDA library.**
+ Added NVIDIA GPU driver implemented with CUDA library under the new
+ GPU device interface.
+
* **Added new RSS offload types for IPv4/L4 checksum in RSS flow.**
- Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and
- TCP/UDP/SCTP header checksum field can be used as input set for RSS.
+ Added macros ``ETH_RSS_IPV4_CHKSUM`` and ``ETH_RSS_L4_CHKSUM``. The IPv4 and
+ TCP/UDP/SCTP header checksum field can now be used as input set for RSS.
* **Added L2TPv2 and PPP protocol support in flow API.**
- Added flow pattern items and header formats of L2TPv2 and PPP protocol.
+ Added flow pattern items and header formats for the L2TPv2 and PPP protocols.
* **Added flow flex item.**
@@ -146,11 +151,11 @@ New Features
* Added new device capability flag and Rx domain field to switch info.
* Added share group and share queue ID to Rx queue configuration.
- * Added testpmd support and dedicate forwarding engine.
+ * Added testpmd support and dedicated forwarding engine.
* **Updated af_packet ethdev driver.**
- * Default VLAN strip behavior was changed. VLAN tag won't be stripped
+ * The default VLAN strip behavior has changed. The VLAN tag won't be stripped
unless ``DEV_RX_OFFLOAD_VLAN_STRIP`` offload is enabled.
* **Added API to get device configuration in ethdev.**
@@ -159,28 +164,30 @@ New Features
* **Updated AF_XDP PMD.**
- * Disabled secondary process support.
+ * Disabled secondary process support due to insufficient state shared
+ between processes which causes a crash. This will be fixed/re-enabled
+ in the next release.
* **Updated Amazon ENA PMD.**
Updated the Amazon ENA PMD. The new driver version (v2.5.0) introduced
bug fixes and improvements, including:
- * Support for the tx_free_thresh and rx_free_thresh configuration parameters.
+ * Support for the ``tx_free_thresh`` and ``rx_free_thresh`` configuration parameters.
* NUMA aware allocations for the queue helper structures.
- * Watchdog's feature which is checking for missing Tx completions.
+ * A Watchdog feature which is checking for missing Tx completions.
* **Updated Broadcom bnxt PMD.**
* Added flow offload support for Thor.
* Added TruFlow and AFM SRAM partitioning support.
- * Implement support for tunnel offload.
+ * Implemented support for tunnel offload.
* Updated HWRM API to version 1.10.2.68.
- * Added NAT support for dest IP and port combination.
+ * Added NAT support for destination IP and port combination.
* Added support for socket redirection.
* Added wildcard match support for ingress flows.
* Added support for inner IP header for GRE tunnel flows.
- * Updated support for RSS action in flow rule.
+ * Updated support for RSS action in flow rules.
* Removed devargs option for stats accumulation.
* **Updated Cisco enic driver.**
@@ -202,9 +209,9 @@ New Features
* Added protocol agnostic flow offloading support in Flow Director.
* Added protocol agnostic flow offloading support in RSS hash.
- * Added 1PPS out support by a devargs.
+ * Added 1PPS out support via devargs.
* Added IPv4 and L4 (TCP/UDP/SCTP) checksum hash support in RSS flow.
- * Added DEV_RX_OFFLOAD_TIMESTAMP support.
+ * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
* Added timesync API support under scalar path.
* Added DCF reset API support.
@@ -225,7 +232,7 @@ New Features
Updated the Mellanox mlx5 driver with new features and improvements, including:
* Added implicit mempool registration to avoid data path hiccups (opt-out).
- * Added delay drop support for Rx queue.
+ * Added delay drop support for Rx queues.
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
@@ -275,7 +282,7 @@ New Features
Added a new Xilinx vDPA (``sfc_vdpa``) PMD.
See the :doc:`../vdpadevs/sfc` guide for more details on this driver.
-* **Added telemetry callbacks to cryptodev library.**
+* **Added telemetry callbacks to the cryptodev library.**
Added telemetry callback functions which allow a list of crypto devices,
stats for a crypto device, and other device information to be queried.
@@ -300,7 +307,7 @@ New Features
* **Added support for event crypto adapter on Marvell CN10K and CN9K.**
- * Added event crypto adapter OP_FORWARD mode support.
+ * Added event crypto adapter ``OP_FORWARD`` mode support.
* **Updated Mellanox mlx5 crypto driver.**
@@ -309,7 +316,7 @@ New Features
* **Updated NXP dpaa_sec crypto PMD.**
- * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algo support.
+ * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algorithm support.
* Added PDCP short MAC-I support.
* Added raw vector datapath API support.
@@ -322,16 +329,16 @@ New Features
* The IPsec_MB framework was added to share common code between Intel
SW Crypto PMDs that depend on the intel-ipsec-mb library.
- * Multiprocess support was added for the consolidated PMDs,
+ * Multiprocess support was added for the consolidated PMDs
which requires v1.1 of the intel-ipsec-mb library.
- * The following PMDs were moved into a single source folder,
- however their usage and EAL options remain unchanged.
+ * The following PMDs were moved into a single source folder
+ while their usage and EAL options remain unchanged.
* AESNI_MB PMD.
* AESNI_GCM PMD.
* KASUMI PMD.
* SNOW3G PMD.
* ZUC PMD.
- * CHACHA20_POLY1305 - A new PMD added.
+ * CHACHA20_POLY1305 - a new PMD.
* **Updated the aesni_mb crypto PMD.**
@@ -381,7 +388,7 @@ New Features
* **Added multi-process support for testpmd.**
Added command-line options to specify total number of processes and
- current process ID. Each process owns subset of Rx and Tx queues.
+ current process ID. Each process owns a subset of Rx and Tx queues.
* **Updated test-crypto-perf application with new cases.**
@@ -404,8 +411,8 @@ New Features
* **Updated l3fwd sample application.**
- * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB),
- this helps in validating SoC with many ethernet devices.
+ * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB).
+ This helps in validating SoC with many Ethernet devices.
* Updated EM mode to use RFC2544 reserved IP address space with RFC863
UDP discard protocol.
@@ -431,8 +438,8 @@ New Features
* **Added ASan support.**
- `AddressSanitizer
- <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_ (ASan)
+ Added ASan/AddressSanitizer support. `AddressSanitizer
+ <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_
is a widely-used debugging tool to detect memory access errors.
It helps to detect issues like use-after-free, various kinds of buffer
overruns in C/C++ programs, and other similar errors, as well as
@@ -454,12 +461,12 @@ Removed Items
* eal: Removed the deprecated function ``rte_get_master_lcore()``
and the iterator macro ``RTE_LCORE_FOREACH_SLAVE``.
-* eal: The old api arguments that were deprecated for
+* eal: The old API arguments that were deprecated for
blacklist/whitelist are removed. Users must use the new
block/allow list arguments.
* mbuf: Removed offload flag ``PKT_RX_EIP_CKSUM_BAD``.
- ``PKT_RX_OUTER_IP_CKSUM_BAD`` should be used as a replacement.
+ The ``PKT_RX_OUTER_IP_CKSUM_BAD`` flag should be used as a replacement.
* ethdev: Removed the port mirroring API. A more fine-grain flow API
action ``RTE_FLOW_ACTION_TYPE_SAMPLE`` should be used instead.
@@ -468,9 +475,9 @@ Removed Items
``rte_eth_mirror_rule_reset`` along with the associated macros
``ETH_MIRROR_*`` are removed.
-* ethdev: Removed ``rte_eth_rx_descriptor_done`` API function and its
+* ethdev: Removed the ``rte_eth_rx_descriptor_done()`` API function and its
driver callback. It is replaced by the more complete function
- ``rte_eth_rx_descriptor_status``.
+ ``rte_eth_rx_descriptor_status()``.
* ethdev: Removed deprecated ``shared`` attribute of the
``struct rte_flow_action_count``. Shared counters should be managed
@@ -548,21 +555,21 @@ API Changes
* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
array is extended, data pointer field is explicitly added to union, the
- action behavior is defined in more strict fashion and documentation updated.
+ action behavior is defined in a more strict fashion and documentation updated.
The immediate value behavior has been changed, the entire immediate field
should be provided, and offset for immediate source bitfield is assigned
- from destination one.
+ from the destination one.
* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
driver interface are marked as internal.
-* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is modified to
- rte_cryptodev_is_valid_dev as it can be used by the application as
- well as PMD to check whether the device is valid or not.
+* cryptodev: The API ``rte_cryptodev_pmd_is_valid_dev()`` is modified to
+ ``rte_cryptodev_is_valid_dev()`` as it can be used by the application as
+ well as the PMD to check whether the device is valid or not.
-* cryptodev: The rte_cryptodev_pmd.* files are renamed as cryptodev_pmd.*
- as it is for drivers only and should be private to DPDK, and not
+* cryptodev: The ``rte_cryptodev_pmd.*`` files are renamed to ``cryptodev_pmd.*``
+ since they are for drivers only and should be private to DPDK, and not
installed for app use.
* cryptodev: A ``reserved`` byte from structure ``rte_crypto_op`` was
@@ -590,8 +597,8 @@ API Changes
* ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix.
Obsolete macros are kept for compatibility.
DPDK components updated to use new names.
- Experimental function ``rte_frag_table_del_expired_entries`` was renamed
- to ``rte_ip_frag_table_del_expired_entries``
+ Experimental function ``rte_frag_table_del_expired_entries()`` was renamed
+ to ``rte_ip_frag_table_del_expired_entries()``
to comply with other public API naming convention.
@@ -610,14 +617,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
-* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+* ethdev: All enums and macros updated to have ``RTE_ETH`` prefix and structures
updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
-* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
- Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
- to internal queue data as input parameter. While this change is transparent
- to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
- is used by public inline function ``rte_eth_rx_queue_count``.
+* ethdev: The input parameters for ``eth_rx_queue_count_t`` were changed.
+ Instead of a pointer to ``rte_eth_dev`` and queue index, it now accepts a pointer
+ to internal queue data as an input parameter. While this change is transparent
+ to the user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+ is used by the public inline function ``rte_eth_rx_queue_count``.
* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
private data structures. ``rte_eth_devices[]`` can't be accessed directly
@@ -663,7 +670,7 @@ ABI Changes
* security: A new structure ``esn`` was added in structure
``rte_security_ipsec_xform`` to set an initial ESN value. This permits
- application to start from an arbitrary ESN value for debug and SA lifetime
+ applications to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
* security: A new structure ``udp`` was added in structure
@@ -689,7 +696,7 @@ ABI Changes
``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` from ``4`` to ``8``.
This parameter controls maximum number of fragments per packet
in IP reassembly table. Increasing this value from ``4`` to ``8``
- will allow to cover common case with jumbo packet size of ``9KB``
+ will allow covering the common case with jumbo packet size of ``9000B``
and fragments with default frame size ``(1500B)``.
--
2.25.1
^ permalink raw reply [relevance 12%]
* Re: [PATCH v1] doc: update release notes for 21.11
2021-11-22 17:00 12% [PATCH v1] doc: update release notes for 21.11 John McNamara
@ 2021-11-22 17:05 0% ` Ajit Khaparde
0 siblings, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-11-22 17:05 UTC (permalink / raw)
To: John McNamara; +Cc: dpdk-dev, Thomas Monjalon
On Mon, Nov 22, 2021 at 9:01 AM John McNamara <john.mcnamara@intel.com> wrote:
>
> Fix grammar, spelling and formatting of DPDK 21.11 release notes.
>
> Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
^ permalink raw reply [relevance 0%]
* [PATCH v2 2/2] doc: announce KNI deprecation
@ 2021-11-23 12:08 5% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-23 12:08 UTC (permalink / raw)
To: dev, Ray Kinsella
Cc: Ferruh Yigit, Olivier Matz, David Marchand, Stephen Hemminger,
Elad Nachman, Igor Ryzhov, Dan Gora
Announce the move of the KNI kernel module out of the dpdk repo, and
announce the long-term plan to deprecate KNI.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Elad Nachman <eladv6@gmail.com>
Cc: Igor Ryzhov <iryzhov@nfware.com>
Cc: Dan Gora <dg@adax.com>
The dates have not been discussed before; the patch aims to trigger a
discussion on them.
---
doc/guides/prog_guide/kernel_nic_interface.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 6 ++++++
2 files changed, 8 insertions(+)
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 70e92687d711..276014fe28bb 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -7,6 +7,8 @@ Kernel NIC Interface
====================
.. Note::
+ KNI kernel module will be moved from main git repository to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_ repository.
+ There is a long term plan to deprecate the KNI. See :doc:`../rel_notes/deprecation`
:ref:`virtio_user_as_exceptional_path` alternative is the preferred way for
interfacing with the Linux network stack as it is an in-kernel solution and
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6d087c64ef28..62fd991e4eb4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,6 +48,12 @@ Deprecation Notices
in the header will not be considered as ABI anymore. This change is inspired
by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
+* kni: KNI kernel module will be moved to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_
+ repository by the `DPDK technical board decision
+ <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_, in v22.11.
+* kni: will be deprecated; the KNI library, kernel module and example code
+ will be removed in v23.11.
+
* lib: will fix extending some enum/define breaking the ABI. There are multiple
samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
used by iterators, and arrays holding these values are sized with this
--
2.31.1
^ permalink raw reply [relevance 5%]
* Minutes of Technical Board Meeting, 2021-11-17
@ 2021-11-24 13:00 4% Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2021-11-24 13:00 UTC (permalink / raw)
To: dev
Members Attending
-----------------
- Aaron
- Bruce
- Ferruh
- Honnappa
- Jerin
- Kevin
- Konstantin
- Maxime
- Olivier (Chair)
- Stephen
- Thomas
NOTE: The technical board meets every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
NOTE: Next meeting will be on Wednesday 2021-12-01 @3pm UTC, and will
be chaired by Stephen.
1. Switch to 3 releases per year instead of 4
=============================================
Reference: http://inbox.dpdk.org/dev/5786413.XMpytKYiJR@thomas
Only positive feedback has been received on the mailing list so far.
This proposal is therefore accepted: DPDK will have only 3 releases
in 2022, unless strong opposition, with suitable justification, is
raised on the DPDK dev mailing list ahead of the final DPDK 21.11
release.
2. Raise the maximum number of lcores
=====================================
References:
- https://inbox.dpdk.org/dev/1902057.C4l9sbjloW@thomas/
- https://inbox.dpdk.org/dev/CAJFAV8z-5amvEnr3mazkTqH-7SZX_C6EqCua6UdMXXHgrcmT6g@mail.gmail.com/
Modifying this value is an ABI change and has an impact on memory
consumption. There is no identified use-case where a single
application requires more than 128 lcores.
- Ideally, this configuration should be dynamic at runtime, but it would
require a lot of changes
- It is possible with the --lcores EAL option to bind up to 128 lcores to
any lcore id (even higher than 128). If "-l 129" is passed to EAL, a
message giving the alternative syntax ("--lcores 0@129") is
displayed. An option to rebind automatically could help for usability.
- If a use-case exists for a single application that uses
more than 128 lcores, the TB is ok to update the default config value.
Note that it is already possible to change the value at compilation
time with -Dmax_lcores in meson.
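The options discussed above can be illustrated with hypothetical command lines
(the application binary name is a placeholder):

```shell
# Fails on a default build: lcore id 129 exceeds the default maximum of 128.
./dpdk-app -l 129

# Works on a default build: binds internal lcore id 0 to physical CPU 129.
./dpdk-app --lcores 0@129

# Alternative: raise the limit at build time.
meson setup build -Dmax_lcores=256
```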
3. New threading API
====================
References:
- https://patches.dpdk.org/project/dpdk/list/?series=20472&state=*
- https://inbox.dpdk.org/dev/1636594425-9692-1-git-send-email-navasile@linux.microsoft.com/
DPDK relies on the pthread interface for EAL threads, which is not
supported on Windows. The Windows DPDK code currently emulates pthread. A
patchset has been proposed which, among others:
- makes the EAL thread API rely on OS-specific implementations
- removes direct call to pthread in dpdk
This patchset (not for 21.11) needs more reviews. People from TB should
take a look at it.
The TB provided some guidelines:
- the EAL thread API level should be similar to pthread API
(it would mostly be a namespace change for posix)
- the API/ABI should remain compatible. It is possible to make use of
rte_function_versioning.h for that
4. DTS Co-maintenance
=====================
Owen Hilyard from UNH proposes himself to be the co-maintainer for DTS.
This would for instance help to ensure that the interface between CI
and DTS remains stable.
The TB welcomes this proposition, as long as there is no opposition from
current DTS maintainer and DTS community.
By the way, the TB asks for volunteers to help make the transition to
the DPDK repository.
5. Spell checking in the CI infrastructure and patchwork
========================================================
The spell checking was done with aspell on the documentation. The problem is
that the check is done on everything, including code and acronyms, resulting
in constant failures.
The TB recommends focusing on per-patch checks, on rst files first. A tool
should be provided in dpdk/devtools, so it can also be used by developers.
Spelling errors should be considered as warnings, given that code or acronyms
may trigger false positives.
^ permalink raw reply [relevance 4%]
* [PATCH v3 2/2] doc: announce KNI deprecation
@ 2021-11-24 17:16 5% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-24 17:16 UTC (permalink / raw)
To: Ray Kinsella
Cc: Ferruh Yigit, dev, Olivier Matz, David Marchand,
Stephen Hemminger, Elad Nachman, Igor Ryzhov, Dan Gora
Announce the KNI kernel module move out of the dpdk repo and announce the
long term plan to deprecate the KNI.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Elad Nachman <eladv6@gmail.com>
Cc: Igor Ryzhov <iryzhov@nfware.com>
Cc: Dan Gora <dg@adax.com>
Dates have not been discussed before; the patch aims to trigger a discussion
about the dates.
---
doc/guides/prog_guide/kernel_nic_interface.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 6 ++++++
2 files changed, 8 insertions(+)
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index f5a8b7c0782c..d1c5ccd0851d 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -7,6 +7,8 @@ Kernel NIC Interface
====================
.. Note::
+ KNI kernel module will be moved from main git repository to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_ repository.
+ There is a long term plan to deprecate the KNI. See :doc:`../rel_notes/deprecation`
:ref:`virtio_user_as_exceptional_path` alternative is the preferred way for
interfacing with the Linux network stack as it is an in-kernel solution and
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 2262b8de6093..f20852504319 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,6 +48,12 @@ Deprecation Notices
in the header will not be considered as ABI anymore. This change is inspired
by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
+* kni: KNI kernel module will be moved to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_
+ repository by the `DPDK technical board decision
+ <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_, on v22.11.
+* kni: will be deprecated; all KNI library, kernel module and example code will
+ be removed in v23.11.
+
* lib: will fix extending some enum/define breaking the ABI. There are multiple
samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
used by iterators, and arrays holding these values are sized with this
--
2.31.1
^ permalink raw reply [relevance 5%]
* Re: [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister
@ 2021-11-24 17:24 3% ` Tyler Retzlaff
2021-11-24 18:04 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-24 17:24 UTC (permalink / raw)
To: Thomas Monjalon
Cc: eagostini, techboard, dev, Andrew Rybchenko, David Marchand,
Ferruh Yigit
On Fri, Nov 19, 2021 at 10:56:36AM +0100, Thomas Monjalon wrote:
> 19/11/2021 10:34, Ferruh Yigit:
> > >> + if (ptr == NULL) {
> > >> + rte_errno = EINVAL;
> > >> + return -rte_errno;
> > >> + }
> > >
> > > in general dpdk has real problems with how it indicates that an error
> > > occurred and what error occurred consistently.
> > >
> > > some api's return 0 on success
> > > and maybe return -errno if ! 0
> > > and maybe return errno if ! 0
>
> Which function returns a positive errno?
i may have misspoken about this variant, it may be something i recall
seeing in a posted patch that was resolved before integration.
>
> > > and maybe set rte_errno if ! 0
> > >
> > > some api's return -1 on failure
> > > and set rte_errno if -1
> > >
> > > some api's return < 0 on failure
> > > and maybe set rte_errno
> > > and maybe return -errno
> > > and maybe set rte_errno and return -rte_errno
> >
> > This is a generic comment, cc'ed a few more folks to make the comment more
> > visible.
> >
> > > this isn't isolated to only this change but since additions and context
> > > in this patch highlight it maybe it's a good time to bring it up.
> > >
> > > it's frustrating to have to carefully read the implementation every time
> > > you want to make a function call to make sure you're handling the flavor
> > > of error reporting for a particular function.
> > >
> > > if this is new code could we please clearly identify the current best
> > > practice and follow it as a standard going forward for all new public
> > > apis.
>
> I think this patch is following the best practice.
> 1/ Return negative value in case of error
> 2/ Set rte_errno
> 3/ Set same absolute value in rte_errno and return code
with the approach proposed as best practice above it results in at least the
application code variations as follows.
int rv = rte_func_call();
1. if (rv < 0 && rte_errno == EAGAIN)
2. if (rv == -1 && rte_errno == EAGAIN)
3. if (rv < 0 && -rv == EAGAIN)
4. if (rv < 0 && rv == -EAGAIN)
(and incorrectly)
5. // ignore rv
if (rte_errno == EAGAIN)
it might be better practice if the indication that an error occurred were
signaled distinctly from the error itself. otherwise why use
rte_errno at all instead of returning -rte_errno always?
this philosophy would align better with modern posix / unix platform
apis. often documented in the RETURN VALUE section of the manpage as:
``Upon successful completion, somefunction() shall return 0;
otherwise, -1 shall be returned and errno set to indicate the
error.''
therefore returning a value outside of the set {0, -1} is an abi break.
separately i have misgivings about how many patches have been integrated
and in some instances backported to dpdk stable that have resulted in
new return values and / or set new values to rte_errno outside of the
set of values initially possible when the dpdk release was made.
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister
2021-11-24 17:24 3% ` Tyler Retzlaff
@ 2021-11-24 18:04 0% ` Bruce Richardson
2021-12-01 21:37 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-11-24 18:04 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Thomas Monjalon, eagostini, techboard, dev, Andrew Rybchenko,
David Marchand, Ferruh Yigit
On Wed, Nov 24, 2021 at 09:24:42AM -0800, Tyler Retzlaff wrote:
> On Fri, Nov 19, 2021 at 10:56:36AM +0100, Thomas Monjalon wrote:
> > 19/11/2021 10:34, Ferruh Yigit:
> > > >> + if (ptr == NULL) {
> > > >> + rte_errno = EINVAL;
> > > >> + return -rte_errno;
> > > >> + }
> > > >
> > > > in general dpdk has real problems with how it indicates that an error
> > > > occurred and what error occurred consistently.
> > > >
> > > > some api's return 0 on success
> > > > and maybe return -errno if ! 0
> > > > and maybe return errno if ! 0
> >
> > Which function returns a positive errno?
>
> i may have misspoken about this variant, it may be something i recall
> seeing in a posted patch that was resolved before integration.
>
> >
> > > > and maybe set rte_errno if ! 0
> > > >
> > > > some api's return -1 on failure
> > > > and set rte_errno if -1
> > > >
> > > > some api's return < 0 on failure
> > > > and maybe set rte_errno
> > > > and maybe return -errno
> > > > and maybe set rte_errno and return -rte_errno
> > >
> > > This is a generic comment, cc'ed a few more folks to make the comment more
> > > visible.
> > >
> > > > this isn't isolated to only this change but since additions and context
> > > > in this patch highlight it maybe it's a good time to bring it up.
> > > >
> > > > it's frustrating to have to carefully read the implementation every time
> > > > you want to make a function call to make sure you're handling the flavor
> > > > of error reporting for a particular function.
> > > >
> > > > if this is new code could we please clearly identify the current best
> > > > practice and follow it as a standard going forward for all new public
> > > > apis.
> >
> > I think this patch is following the best practice.
> > 1/ Return negative value in case of error
> > 2/ Set rte_errno
> > 3/ Set same absolute value in rte_errno and return code
>
> with the approach proposed as best practice above it results in at least the
> application code variations as follows.
>
> int rv = rte_func_call();
>
> 1. if (rv < 0 && rte_errno == EAGAIN)
>
> 2. if (rv == -1 && rte_errno == EAGAIN)
>
> 3. if (rv < 0 && -rv == EAGAIN)
>
> 4. if (rv < 0 && rv == -EAGAIN)
>
> (and incorrectly)
>
> 5. // ignore rv
> if (rte_errno == EAGAIN)
>
> it might be better practice if indication that an error occurs is
> signaled distinctly from the error that occurred. otherwise why use
> rte_errno at all instead returning -rte_errno always?
>
> this philosophy would align better with modern posix / unix platform
> apis. often documented in the RETURN VALUE section of the manpage as:
>
> ``Upon successful completion, somefunction() shall return 0;
> otherwise, -1 shall be returned and errno set to indicate the
> error.''
>
> therefore returning a value outside of the set {0, -1} is an abi break.
I like using this standard, because it also allows consistent behaviour for
non-integer returning functions, e.g. object creation functions returning
pointers.
if (ret < 0 && rte_errno == EAGAIN)
becomes for a pointer:
if (ret == NULL && rte_errno == EAGAIN)
Regards,
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3] ethdev: deprecate header fields and metadata flow actions
@ 2021-11-25 12:31 4% ` Ferruh Yigit
2021-11-25 12:50 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-25 12:31 UTC (permalink / raw)
To: Ray Kinsella, Thomas Monjalon, Ori Kam
Cc: thomas, dev, Viacheslav Ovsiienko, Andrew Rybchenko, David Marchand
On 11/24/2021 3:37 PM, Viacheslav Ovsiienko wrote:
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 6d087c64ef..d04a606b7d 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -101,6 +101,20 @@ Deprecation Notices
> is deprecated as ambiguous with respect to the embedded switch. The use of
> these attributes will become invalid starting from DPDK 22.11.
>
> +* ethdev: Actions ``OF_SET_MPLS_TTL``, ``OF_DEC_MPLS_TTL``, ``OF_SET_NW_TTL``,
> + ``OF_COPY_TTL_OUT``, ``OF_COPY_TTL_IN`` are deprecated as not supported by
> + PMDs, will be removed in DPDK 22.11.
> +
> +* ethdev: Actions ``OF_DEC_NW_TTL``, ``SET_IPV4_SRC``, ``SET_IPV4_DST``,
> + ``SET_IPV6_SRC``, ``SET_IPV6_DST``, ``SET_TP_SRC``, ``SET_TP_DST``,
> + ``DEC_TTL``, ``SET_TTL``, ``SET_MAC_SRC``, ``SET_MAC_DST``, ``INC_TCP_SEQ``,
> + ``DEC_TCP_SEQ``, ``INC_TCP_ACK``, ``DEC_TCP_ACK``, ``SET_IPV4_DSCP``,
> + ``SET_IPV6_DSCP``, ``SET_TAG``, ``SET_META`` are deprecated as superseded
> + by generic MODIFY_FIELD action, will be removed in DPDK 22.11.
> +
> +* ethdev: Actions ``OF_SET_VLAN_VID``, ``OF_SET_VLAN_PCP`` are deprecated
> + as superseded by generic MODIFY_FIELD action.
> +
I have a question about an ABI/API related issue for rte_flow support.
If a driver removes support for a flow API item/action, it directly impacts
the user application. An application that previously worked may stop working
and require a code update; this is something we want to prevent with the
ABI policy. And this kind of change is not caught by our tools.
Do we have a process to deprecate/remove flow API item/action support?
For instance, that they can only be removed in an ABI-break release...
Thanks,
ferruh
^ permalink raw reply [relevance 4%]
* Re: [PATCH v3] ethdev: deprecate header fields and metadata flow actions
2021-11-25 12:31 4% ` Ferruh Yigit
@ 2021-11-25 12:50 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-25 12:50 UTC (permalink / raw)
To: Ray Kinsella, Ori Kam, Ferruh Yigit
Cc: dev, Viacheslav Ovsiienko, Andrew Rybchenko, David Marchand
25/11/2021 13:31, Ferruh Yigit:
> On 11/24/2021 3:37 PM, Viacheslav Ovsiienko wrote:
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 6d087c64ef..d04a606b7d 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -101,6 +101,20 @@ Deprecation Notices
> > is deprecated as ambiguous with respect to the embedded switch. The use of
> > these attributes will become invalid starting from DPDK 22.11.
> >
> > +* ethdev: Actions ``OF_SET_MPLS_TTL``, ``OF_DEC_MPLS_TTL``, ``OF_SET_NW_TTL``,
> > + ``OF_COPY_TTL_OUT``, ``OF_COPY_TTL_IN`` are deprecated as not supported by
> > + PMDs, will be removed in DPDK 22.11.
> > +
> > +* ethdev: Actions ``OF_DEC_NW_TTL``, ``SET_IPV4_SRC``, ``SET_IPV4_DST``,
> > + ``SET_IPV6_SRC``, ``SET_IPV6_DST``, ``SET_TP_SRC``, ``SET_TP_DST``,
> > + ``DEC_TTL``, ``SET_TTL``, ``SET_MAC_SRC``, ``SET_MAC_DST``, ``INC_TCP_SEQ``,
> > + ``DEC_TCP_SEQ``, ``INC_TCP_ACK``, ``DEC_TCP_ACK``, ``SET_IPV4_DSCP``,
> > + ``SET_IPV6_DSCP``, ``SET_TAG``, ``SET_META`` are deprecated as superseded
> > + by generic MODIFY_FIELD action, will be removed in DPDK 22.11.
> > +
> > +* ethdev: Actions ``OF_SET_VLAN_VID``, ``OF_SET_VLAN_PCP`` are deprecated
> > + as superseded by generic MODIFY_FIELD action.
> > +
>
>
> I have a question about ABI/API related issue for rte_flow support,
>
> If a driver removes an flow API item/action support, it directly impacts
> the user application. The application previously working may stop working
> and require code update, this is something we want to prevent with
> ABI policy. And this kind of changes are not caught by our tools.
>
> Do we have a process to deprecate/remove a flow API item/action support?
> Like they can be only removed on ABI break release...
If possible, we should avoid removing them, or dropping support in a driver.
I think removing a feature could be considered only if not too many drivers
use it, or if it becomes a real burden to maintain.
^ permalink raw reply [relevance 0%]
* DPDK 21.11 released!
@ 2021-11-26 20:34 4% David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-26 20:34 UTC (permalink / raw)
To: announce; +Cc: Thomas Monjalon
A new major release is available:
https://fast.dpdk.org/rel/dpdk-21.11.tar.xz
This is a big DPDK release.
1875 commits from 204 authors
2413 files changed, 259559 insertions(+), 87876 deletions(-)
The branch 21.11 should be supported for at least two years,
making it recommended for system integration and deployment.
The new major ABI version is 22.
The next releases 22.03 and 22.07 will be ABI compatible with 21.11.
As you probably noticed, the year 2022 will see only two intermediate
releases before the next 22.11 LTS.
Below are some new features, grouped by category.
* General
- hugetlbfs subdirectories
- AddressSanitizer (ASan) integration for debug
- mempool flag for non-IO usages
- device class for DMA accelerators and drivers for
HiSilicon, Intel DSA, Intel IOAT, Marvell CNXK and NXP DPAA
- device class for GPU devices and driver for NVIDIA CUDA
- Toeplitz hash using Galois Fields New Instructions (GFNI)
* Networking
- MTU handling rework
- get all MAC addresses of a port
- RSS based on L3/L4 checksum fields
- flow match on L2TPv2 and PPP
- flow flex parser for custom header
- control delivery of HW Rx metadata
- transfer flows API rework
- shared Rx queue
- Windows support of Intel e1000, ixgbe and iavf
- driver for NXP ENETFEC
- vDPA driver for Xilinx devices
- virtio RSS
- vhost power monitor wakeup
- testpmd multi-process
- pcapng library and dumpcap tool
* API/ABI
- API namespace improvements and cleanups
- API internals hidden
- flags check for future ABI compatibility
More details in the release notes:
http://doc.dpdk.org/guides/rel_notes/release_21_11.html
There are 55 new contributors (including authors, reviewers and testers).
Welcome to Abhijit Sinha, Ady Agbarih, Alexander Bechikov, Alice Michael,
Artur Tyminski, Ben Magistro, Ben Pfaff, Charles Brett, Chengfeng Ye,
Christopher Pau, Daniel Martin Buckley, Danny Patel, Dariusz Sosnowski,
David George, Elena Agostini, Ganapati Kundapura, Georg Sauthoff,
Hanumanth Reddy Pothula, Harneet Singh, Huichao Cai, Idan Hackmon,
Ilyes Ben Hamouda, Jilei Chen, Jonathan Erb, Kumara Parameshwaran,
Lewei Yang, Liang Longfeng, Longfeng Liang, Maciej Fijalkowski,
Maciej Paczkowski, Maciej Szwed, Marcin Domagala, Miao Li,
Michal Berger, Michal Michalik, Mihai Pogonaru, Mohamad Noor Alim Hussin,
Nikhil Vasoya, Pawel Malinowski, Pei Zhang, Pravin Pathak,
Przemyslaw Zegan, Qiming Chen, Rashmi Shetty, Richard Eklycke,
Sean Zhang, Siddaraju DH, Steve Rempe, Sylwester Dziedziuch,
Volodymyr Fialko, Wojciech Drewek, Wojciech Liguzinski, Xingguang He,
Yu Wenjun, Yvonne Yang.
Below is the number of commits per employer (with authors count):
525 Intel (64)
331 NVIDIA (29)
312 Marvell (28)
155 OKTET Labs (5)
91 Huawei (7)
89 Red Hat (6)
75 Broadcom (11)
67 NXP (8)
49 Arm (5)
34 Trustnet (1)
29 Microsoft (4)
13 6WIND (2)
10 Xilinx (1)
A big thank you to all the courageous people who took on the unrewarding task
of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
113 Akhil Goyal <gakhil@marvell.com>
83 Ferruh Yigit <ferruh.yigit@intel.com>
70 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
51 Ray Kinsella <mdr@ashroe.eu>
50 Konstantin Ananyev <konstantin.ananyev@intel.com>
47 Bruce Richardson <bruce.richardson@intel.com>
46 Conor Walsh <conor.walsh@intel.com>
45 David Marchand <david.marchand@redhat.com>
39 Ruifeng Wang <ruifeng.wang@arm.com>
37 Jerin Jacob <jerinj@marvell.com>
36 Olivier Matz <olivier.matz@6wind.com>
36 Fan Zhang <roy.fan.zhang@intel.com>
32 Chenbo Xia <chenbo.xia@intel.com>
32 Ajit Khaparde <ajit.khaparde@broadcom.com>
25 Ori Kam <orika@nvidia.com>
23 Kevin Laatz <kevin.laatz@intel.com>
22 Ciara Power <ciara.power@intel.com>
20 Thomas Monjalon <thomas@monjalon.net>
19 Xiaoyun Li <xiaoyun.li@intel.com>
18 Maxime Coquelin <maxime.coquelin@redhat.com>
The new features for 22.03 may be submitted during the next 4 weeks so
that we can all enjoy a good break at the end of this year.
2022 will see a change in pace for releases timing, let's make the best
of it to make good reviews.
DPDK 22.03 is scheduled for early March:
http://core.dpdk.org/roadmap#dates
Please share your roadmap.
Thanks everyone!
--
David Marchand
^ permalink raw reply [relevance 4%]
* [PATCH] version: 22.03-rc0
@ 2021-11-29 13:16 11% David Marchand
2021-11-30 15:35 0% ` Thomas Monjalon
2021-12-02 18:11 11% ` [PATCH v2] " David Marchand
0 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-11-29 13:16 UTC (permalink / raw)
To: dev; +Cc: Aaron Conole, Michael Santana
Start a new release cycle with empty release notes.
Bump version and ABI minor.
Enable ABI checks using latest libabigail.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.github/workflows/build.yml | 6 +-
.travis.yml | 23 ++++-
ABI_VERSION | 2 +-
VERSION | 2 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_22_03.rst | 138 +++++++++++++++++++++++++
6 files changed, 165 insertions(+), 7 deletions(-)
create mode 100644 doc/guides/rel_notes/release_22_03.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 2e9c4be6d0..1a29e107be 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -20,10 +20,10 @@ jobs:
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
- LIBABIGAIL_VERSION: libabigail-1.8
+ LIBABIGAIL_VERSION: libabigail-2.0
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
- REF_GIT_TAG: none
+ REF_GIT_TAG: v21.11
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
strategy:
@@ -40,7 +40,7 @@ jobs:
- os: ubuntu-18.04
compiler: gcc
library: shared
- checks: doc+tests
+ checks: abi+doc+tests
- os: ubuntu-18.04
compiler: clang
library: static
diff --git a/.travis.yml b/.travis.yml
index 4bb5bf629e..da5273048f 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -41,8 +41,8 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
env:
global:
- - LIBABIGAIL_VERSION=libabigail-1.8
- - REF_GIT_TAG=none
+ - LIBABIGAIL_VERSION=libabigail-2.0
+ - REF_GIT_TAG=v21.11
jobs:
include:
@@ -61,6 +61,14 @@ jobs:
packages:
- *required_packages
- *doc_packages
+ - env: DEF_LIB="shared" ABI_CHECKS=true
+ arch: amd64
+ compiler: gcc
+ addons:
+ apt:
+ packages:
+ - *required_packages
+ - *libabigail_build_packages
# x86_64 clang jobs
- env: DEF_LIB="static"
arch: amd64
@@ -137,6 +145,17 @@ jobs:
packages:
- *required_packages
- *doc_packages
+ - env: DEF_LIB="shared" ABI_CHECKS=true
+ dist: focal
+ arch: arm64-graviton2
+ virt: vm
+ group: edge
+ compiler: gcc
+ addons:
+ apt:
+ packages:
+ - *required_packages
+ - *libabigail_build_packages
# aarch64 clang jobs
- env: DEF_LIB="static"
dist: focal
diff --git a/ABI_VERSION b/ABI_VERSION
index b090fe57f6..70a91e23ec 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-22.0
+22.1
diff --git a/VERSION b/VERSION
index b570734337..25bb269237 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-21.11.0
+22.03.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 78861ee57b..876ffd28f6 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_22_03
release_21_11
release_21_08
release_21_05
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
new file mode 100644
index 0000000000..6d99d1eaa9
--- /dev/null
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 22.03
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_22_03.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+* No ABI change that would break compatibility with 21.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
--
2.23.0
^ permalink raw reply [relevance 11%]
* Re: [PATCH] version: 22.03-rc0
2021-11-29 13:16 11% [PATCH] version: 22.03-rc0 David Marchand
@ 2021-11-30 15:35 0% ` Thomas Monjalon
2021-11-30 19:51 3% ` David Marchand
2021-12-02 18:11 11% ` [PATCH v2] " David Marchand
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-30 15:35 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Aaron Conole, Michael Santana
29/11/2021 14:16, David Marchand:
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
> Enable ABI checks using latest libabigail.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
[...]
> - LIBABIGAIL_VERSION: libabigail-1.8
> + LIBABIGAIL_VERSION: libabigail-2.0
What is the reason for this update? Can we still use the old version?
Maybe add a small comment in the commit log.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Thanks
^ permalink raw reply [relevance 0%]
* Re: [PATCH] version: 22.03-rc0
2021-11-30 15:35 0% ` Thomas Monjalon
@ 2021-11-30 19:51 3% ` David Marchand
2021-12-02 16:13 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-11-30 19:51 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Aaron Conole, Michael Santana, Dodji Seketeli
On Tue, Nov 30, 2021 at 4:35 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 29/11/2021 14:16, David Marchand:
> > Start a new release cycle with empty release notes.
> > Bump version and ABI minor.
> > Enable ABI checks using latest libabigail.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> [...]
> > - LIBABIGAIL_VERSION: libabigail-1.8
> > + LIBABIGAIL_VERSION: libabigail-2.0
>
> What is the reason for this update? Can we still use the old version?
Nothing prevents using the old version; I just took this chance
to bump the version.
I talked with Dodji, 2.0 is the version used in Fedora for ABI checks.
This version comes with enhancements and at least a fix for a bug we
got when writing exception rules in dpdk:
https://sourceware.org/bugzilla/show_bug.cgi?id=28060
--
David Marchand
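In practice the CI's ABI check boils down to running abidiff between ABI dumps
of the reference tag and the current build; a sketch follows (the dump paths
are illustrative, the real logic lives in devtools/check-abi.sh):

```shell
# Compare one library's ABI between the v21.11 reference and the new build,
# applying DPDK's suppression rules for intentional exceptions.
abidiff --suppr devtools/libabigail.abignore --no-added-syms \
        ref-v21.11/dump/librte_mempool.dump build/dump/librte_mempool.dump
```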
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister
2021-11-24 18:04 0% ` Bruce Richardson
@ 2021-12-01 21:37 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-12-01 21:37 UTC (permalink / raw)
To: Bruce Richardson
Cc: Thomas Monjalon, eagostini, techboard, dev, Andrew Rybchenko,
David Marchand, Ferruh Yigit
On Wed, Nov 24, 2021 at 06:04:56PM +0000, Bruce Richardson wrote:
> On Wed, Nov 24, 2021 at 09:24:42AM -0800, Tyler Retzlaff wrote:
> > On Fri, Nov 19, 2021 at 10:56:36AM +0100, Thomas Monjalon wrote:
> > > 19/11/2021 10:34, Ferruh Yigit:
> > > > >> + if (ptr == NULL) {
> > > > >> + rte_errno = EINVAL;
> > > > >> + return -rte_errno;
> > > > >> + }
> > > > >
> > > > > in general dpdk has real problems with how it indicates that an error
> > > > > occurred and what error occurred consistently.
> > > > >
> > > > > some api's return 0 on success
> > > > > and maybe return -errno if ! 0
> > > > > and maybe return errno if ! 0
> > >
> > > Which function returns a positive errno?
> >
> > i may have misspoken about this variant, it may be something i recall
> > seeing in a posted patch that was resolved before integration.
> >
> > >
> > > > > and maybe set rte_errno if ! 0
> > > > >
> > > > > some api's return -1 on failure
> > > > > and set rte_errno if -1
> > > > >
> > > > > some api's return < 0 on failure
> > > > > and maybe set rte_errno
> > > > > and maybe return -errno
> > > > > and maybe set rte_errno and return -rte_errno
> > > >
> > > > This is a generic comment, cc'ed a few more folks to make the comment more
> > > > visible.
> > > >
> > > > > this isn't isolated to only this change but since additions and context
> > > > > in this patch highlight it maybe it's a good time to bring it up.
> > > > >
> > > > > it's frustrating to have to carefully read the implementation every time
> > > > > you want to make a function call to make sure you're handling the flavor
> > > > > of error reporting for a particular function.
> > > > >
> > > > > if this is new code could we please clearly identify the current best
> > > > > practice and follow it as a standard going forward for all new public
> > > > > apis.
> > >
> > > I think this patch is following the best practice.
> > > 1/ Return negative value in case of error
> > > 2/ Set rte_errno
> > > 3/ Set same absolute value in rte_errno and return code
> >
> > with the approach proposed as best practice above it results in at least the
> > application code variations as follows.
> >
> > int rv = rte_func_call();
> >
> > 1. if (rv < 0 && rte_errno == EAGAIN)
> >
> > 2. if (rv == -1 && rte_errno == EAGAIN)
> >
> > 3. if (rv < 0 && -rv == EAGAIN)
> >
> > 4. if (rv < 0 && rv == -EAGAIN)
> >
> > (and incorrectly)
> >
> > 5. // ignore rv
> > if (rte_errno == EAGAIN)
> >
> > it might be better practice if indication that an error occurs is
> > signaled distinctly from the error that occurred. otherwise why use
> > rte_errno at all instead returning -rte_errno always?
> >
> > this philosophy would align better with modern posix / unix platform
> > apis. often documented in the RETURN VALUE section of the manpage as:
> >
> > ``Upon successful completion, somefunction() shall return 0;
> > otherwise, -1 shall be returned and errno set to indicate the
> > error.''
> >
> > therefore returning a value outside of the set {0, -1} is an abi break.
>
> I like using this standard, because it also allows consistent behaviour for
> non-integer returning functions, e.g. object creation functions returning
> pointers.
>
> if (ret < 0 && rte_errno == EAGAIN)
i only urge that this be explicit as opposed to a range i.e. ret == -1
preferred over ret < 0
>
> becomes for a pointer:
>
> if (ret == NULL && rte_errno == EAGAIN)
>
> Regards,
> /Bruce
but otherwise i agree, ret indicates an error happened and rte_errno
provides the detail.
^ permalink raw reply [relevance 0%]
* Re: [PATCH] version: 22.03-rc0
2021-11-30 19:51 3% ` David Marchand
@ 2021-12-02 16:13 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-12-02 16:13 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Aaron Conole, Michael Santana, Dodji Seketeli
On Tue, Nov 30, 2021 at 8:51 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Tue, Nov 30, 2021 at 4:35 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 29/11/2021 14:16, David Marchand:
> > > Start a new release cycle with empty release notes.
> > > Bump version and ABI minor.
> > > Enable ABI checks using latest libabigail.
> > >
> > > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > [...]
> > > - LIBABIGAIL_VERSION: libabigail-1.8
> > > + LIBABIGAIL_VERSION: libabigail-2.0
> >
> > What is the reason for this update? Can we still use the old version?
>
> Nothing prevents from using the old version, I just used this chance
> to bump the version.
>
> I talked with Dodji, 2.0 is the version used in Fedora for ABI checks.
> This version comes with enhancements and at least a fix for a bug we
> got when writing exception rules in dpdk:
> https://sourceware.org/bugzilla/show_bug.cgi?id=28060
I ran more checks with 2.0 and unfortunately, I get an issue with dpdk
on Fedora 35 libabigail.
2.0 built in Ubuntu does not seem affected, but I prefer to be safe,
stick to 1.8 version and wait for Dodji to have a look.
v2 on the way.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [PATCH v2] version: 22.03-rc0
2021-11-29 13:16 11% [PATCH] version: 22.03-rc0 David Marchand
2021-11-30 15:35 0% ` Thomas Monjalon
@ 2021-12-02 18:11 11% ` David Marchand
2021-12-02 19:34 0% ` Thomas Monjalon
2021-12-02 20:36 0% ` David Marchand
1 sibling, 2 replies; 200+ results
From: David Marchand @ 2021-12-02 18:11 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon, Aaron Conole, Michael Santana
Start a new release cycle with empty release notes.
Bump version and ABI minor.
Enable ABI checks.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
Changes since v1:
- stick to libabigail 1.8,
---
.github/workflows/build.yml | 4 +-
.travis.yml | 21 +++-
ABI_VERSION | 2 +-
VERSION | 2 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_22_03.rst | 138 +++++++++++++++++++++++++
6 files changed, 163 insertions(+), 5 deletions(-)
create mode 100644 doc/guides/rel_notes/release_22_03.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 2e9c4be6d0..6cf997d6ee 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -23,7 +23,7 @@ jobs:
LIBABIGAIL_VERSION: libabigail-1.8
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
- REF_GIT_TAG: none
+ REF_GIT_TAG: v21.11
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
strategy:
@@ -40,7 +40,7 @@ jobs:
- os: ubuntu-18.04
compiler: gcc
library: shared
- checks: doc+tests
+ checks: abi+doc+tests
- os: ubuntu-18.04
compiler: clang
library: static
diff --git a/.travis.yml b/.travis.yml
index 4bb5bf629e..0838f80d3c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -42,7 +42,7 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
env:
global:
- LIBABIGAIL_VERSION=libabigail-1.8
- - REF_GIT_TAG=none
+ - REF_GIT_TAG=v21.11
jobs:
include:
@@ -61,6 +61,14 @@ jobs:
packages:
- *required_packages
- *doc_packages
+ - env: DEF_LIB="shared" ABI_CHECKS=true
+ arch: amd64
+ compiler: gcc
+ addons:
+ apt:
+ packages:
+ - *required_packages
+ - *libabigail_build_packages
# x86_64 clang jobs
- env: DEF_LIB="static"
arch: amd64
@@ -137,6 +145,17 @@ jobs:
packages:
- *required_packages
- *doc_packages
+ - env: DEF_LIB="shared" ABI_CHECKS=true
+ dist: focal
+ arch: arm64-graviton2
+ virt: vm
+ group: edge
+ compiler: gcc
+ addons:
+ apt:
+ packages:
+ - *required_packages
+ - *libabigail_build_packages
# aarch64 clang jobs
- env: DEF_LIB="static"
dist: focal
diff --git a/ABI_VERSION b/ABI_VERSION
index b090fe57f6..70a91e23ec 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-22.0
+22.1
diff --git a/VERSION b/VERSION
index b570734337..25bb269237 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-21.11.0
+22.03.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 78861ee57b..876ffd28f6 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_22_03
release_21_11
release_21_08
release_21_05
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
new file mode 100644
index 0000000000..6d99d1eaa9
--- /dev/null
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 22.03
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_22_03.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+* No ABI change that would break compatibility with 21.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
--
2.23.0
^ permalink raw reply [relevance 11%]
* Re: [PATCH v2] version: 22.03-rc0
2021-12-02 18:11 11% ` [PATCH v2] " David Marchand
@ 2021-12-02 19:34 0% ` Thomas Monjalon
2021-12-02 20:36 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-12-02 19:34 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Aaron Conole, Michael Santana
02/12/2021 19:11, David Marchand:
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
> Enable ABI checks.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> Changes since v1:
> - stick to libabigail 1.8,
OK it looks reasonable.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2] version: 22.03-rc0
2021-12-02 18:11 11% ` [PATCH v2] " David Marchand
2021-12-02 19:34 0% ` Thomas Monjalon
@ 2021-12-02 20:36 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-12-02 20:36 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon, Aaron Conole, Michael Santana
On Thu, Dec 2, 2021 at 7:11 PM David Marchand <david.marchand@redhat.com> wrote:
>
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
> Enable ABI checks.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
Applied, thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [RFC] cryptodev: asymmetric crypto random number source
@ 2021-12-03 10:03 3% Kusztal, ArkadiuszX
2021-12-13 8:14 3% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Kusztal, ArkadiuszX @ 2021-12-03 10:03 UTC (permalink / raw)
To: gakhil, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
[-- Attachment #1: Type: text/plain, Size: 1126 bytes --]
ECDSA op:
rte_crypto_param k;
/**< The ECDSA per-message secret number, which is an integer
* in the interval (1, n-1)
*/
DSA op:
No 'k'.
I think I have described this one some time ago:
The only PMD that verifies ECDSA is OCTEON, which apparently needs 'k' provided by the user.
The only PMD that verifies DSA is the OpenSSL PMD, which will generate its own random number internally.
So in case a PMD supports one of these options (or especially when it supports both) we need to give some information here.
The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
In case (k == NULL) the PMD should generate it itself if possible; otherwise it should push the crypto_op to the response ring with an appropriate error code.
Another options would be:
* Extend rte_cryptodev_config and rte_cryptodev_info with information about the random number generator for a specific device (though it would be an ABI breakage)
* Provide some kind of callback to get random number from user (which could be useful for other things like RSA padding as well)
[-- Attachment #2: Type: text/html, Size: 6854 bytes --]
^ permalink raw reply [relevance 3%]
* [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
@ 2021-12-03 11:38 3% ` Xiaoyun Li
2021-12-15 11:33 0% ` Singh, Aman Deep
0 siblings, 1 reply; 200+ results
From: Xiaoyun Li @ 2021-12-03 11:38 UTC (permalink / raw)
To: ferruh.yigit, olivier.matz, mb, konstantin.ananyev, stephen,
vladimir.medvedkin
Cc: dev, Xiaoyun Li
Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
UDP/TCP checksum in mbuf which can be over multi-segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 10 ++
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
lib/net/version.map | 10 ++
3 files changed, 206 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..7a082c4427 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added functions to calculate UDP/TCP checksum in mbuf.**
+ * Added the following functions to calculate UDP/TCP checksum of packets
+ which can be over multi-segments:
+ - ``rte_ipv4_udptcp_cksum_mbuf()``
+ - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
+ - ``rte_ipv6_udptcp_cksum_mbuf()``
+ - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
Removed Items
-------------
@@ -84,6 +91,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
+ ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
+ ``rte_ipv6_udptcp_cksum_mbuf_verify()``
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
diff --git a/lib/net/version.map b/lib/net/version.map
index 4f4330d1c4..0f2aacdef8 100644
--- a/lib/net/version.map
+++ b/lib/net/version.map
@@ -12,3 +12,13 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 22.03
+ rte_ipv4_udptcp_cksum_mbuf;
+ rte_ipv4_udptcp_cksum_mbuf_verify;
+ rte_ipv6_udptcp_cksum_mbuf;
+ rte_ipv6_udptcp_cksum_mbuf_verify;
+};
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
@ 2021-12-04 17:38 3% ` Stephen Hemminger
2021-12-05 7:03 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-12-04 17:38 UTC (permalink / raw)
To: jerinj
Cc: dev, Ray Kinsella, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, ajit.khaparde, aboyer, beilei.xing,
bruce.richardson, chas3, chenbo.xia, ciara.loftus, dsinghrawat,
ed.czeck, evgenys, grive, g.singh, zhouguoyang, haiyue.wang,
hkalra, heinrich.kuhn, hemant.agrawal, hyonkim, igorch,
irusskikh, jgrajcia, jasvinder.singh, jianwang, jiawenwu,
jingjing.wu, johndale, john.miller, linville, keith.wiles,
kirankumark, oulijun, lironh, longli, mw, spinler, matan,
matt.peters, maxime.coquelin, mk, humin29, pnalla, ndabilpuram,
qiming.yang, qi.z.zhang, radhac, rahul.lakkireddy, rmody,
rosen.xu, sachin.saxena, skoteshwar, shshaikh, shaibran,
shepard.siegel, asomalap, somnath.kotur, sthemmin,
steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2
On Sat, 4 Dec 2021 22:54:58 +0530
<jerinj@marvell.com> wrote:
> + /**
> + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> + *
> + * Based on device support and use-case need, there are two different
> + * ways to enable PFC. The first case is the port level PFC
> + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> + * API shall be used to configure the PFC, and PFC frames will be
> > + * generated based on the VLAN TC value.
> > + * The second case is the queue level PFC configuration, in this case,
> > + * any packet field content can be used to steer the packet to the
> + * specific queue using rte_flow or RSS and then use
> + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> + * on each queue. Based on congestion selected on the specific queue,
> + * configured TC shall be used to generate PFC frames.
> + *
> + * When set to non zero value, application must use queue level
> + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> + * instead of port level PFC configuration via
> + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> + * PFC configuration.
> + */
> + uint8_t pfc_queue_tc_max;
> + uint8_t reserved_8s[7];
> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
Not sure you can claim ABI compatibility because the previous versions of DPDK
did not enforce that reserved fields must be zero. The Linux kernel
learned this when adding flags for new system calls; reserved fields only
work if you enforce that application must set them to zero.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-04 17:38 3% ` Stephen Hemminger
@ 2021-12-05 7:03 3% ` Jerin Jacob
2021-12-05 18:00 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-12-05 7:03 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Sat, 4 Dec 2021 22:54:58 +0530
> <jerinj@marvell.com> wrote:
>
> > + /**
> > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > + *
> > + * Based on device support and use-case need, there are two different
> > + * ways to enable PFC. The first case is the port level PFC
> > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > + * API shall be used to configure the PFC, and PFC frames will be
> > > + * generated based on the VLAN TC value.
> > > + * The second case is the queue level PFC configuration, in this case,
> > > + * any packet field content can be used to steer the packet to the
> > + * specific queue using rte_flow or RSS and then use
> > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > + * on each queue. Based on congestion selected on the specific queue,
> > + * configured TC shall be used to generate PFC frames.
> > + *
> > + * When set to non zero value, application must use queue level
> > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > + * instead of port level PFC configuration via
> > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > + * PFC configuration.
> > + */
> > + uint8_t pfc_queue_tc_max;
> > + uint8_t reserved_8s[7];
> > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
>
> Not sure you can claim ABI compatibility because the previous versions of DPDK
> did not enforce that reserved fields must be zero. The Linux kernel
> learned this when adding flags for new system calls; reserved fields only
> work if you enforce that application must set them to zero.
In this case rte_eth_dev_info is an out parameter and the implementation of
rte_eth_dev_info_get() already memsets it to 0.
Do you still see any other ABI issue?
See rte_eth_dev_info_get()
/*
* Init dev_info before port_id check since caller does not have
* return status and does not know if get is successful or not.
*/
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-05 7:03 3% ` Jerin Jacob
@ 2021-12-05 18:00 0% ` Stephen Hemminger
2021-12-06 9:57 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-12-05 18:00 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sun, 5 Dec 2021 12:33:57 +0530
Jerin Jacob <jerinjacobk@gmail.com> wrote:
> On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Sat, 4 Dec 2021 22:54:58 +0530
> > <jerinj@marvell.com> wrote:
> >
> > > + /**
> > > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > > + *
> > > + * Based on device support and use-case need, there are two different
> > > + * ways to enable PFC. The first case is the port level PFC
> > > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > > + * API shall be used to configure the PFC, and PFC frames will be
> > > > + * generated based on the VLAN TC value.
> > > > + * The second case is the queue level PFC configuration, in this case,
> > > > + * any packet field content can be used to steer the packet to the
> > > + * specific queue using rte_flow or RSS and then use
> > > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > > + * on each queue. Based on congestion selected on the specific queue,
> > > + * configured TC shall be used to generate PFC frames.
> > > + *
> > > + * When set to non zero value, application must use queue level
> > > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > > + * instead of port level PFC configuration via
> > > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > > + * PFC configuration.
> > > + */
> > > + uint8_t pfc_queue_tc_max;
> > > + uint8_t reserved_8s[7];
> > > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> >
> > Not sure you can claim ABI compatibility because the previous versions of DPDK
> > did not enforce that reserved fields must be zero. The Linux kernel
> > learned this when adding flags for new system calls; reserved fields only
> > work if you enforce that application must set them to zero.
>
> In this case rte_eth_dev_info is an out parameter and the implementation of
> rte_eth_dev_info_get() already memsets it to 0.
> Do you still see any other ABI issue?
>
> See rte_eth_dev_info_get()
> /*
> * Init dev_info before port_id check since caller does not have
> * return status and does not know if get is successful or not.
> */
> memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
The concern came from the misleading comment. It talks about what the application should do.
Could you reword the comment so that it describes what pfc_queue_tc_max is here,
and move the flow control set part of the comment to where that API is documented.
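The out-parameter zeroing Jerin points to can be sketched with a tiny mock. The struct and function names below are hypothetical miniatures for illustration only, not the real rte_ethdev layout; the point is that because the getter zeroes the whole struct before filling fields, an old binary reading a field that used to be reserved always sees 0, which is why repurposing a reserved field here does not break ABI:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical miniature of struct rte_eth_dev_info: only the fields
 * discussed in this thread, not the real layout. */
struct mini_dev_info {
	uint8_t pfc_queue_tc_max;
	uint8_t reserved_8s[7];
	uint64_t reserved_64s[1];
};

/* Mirrors the pattern in rte_eth_dev_info_get(): the out parameter is
 * zeroed before any field is filled in, so a caller built against an
 * older ABI sees 0 in fields that were still "reserved" back then. */
static int
mini_dev_info_get(struct mini_dev_info *info)
{
	memset(info, 0, sizeof(*info));
	info->pfc_queue_tc_max = 8; /* example capability value */
	return 0;
}
```

A caller that leaves stale data in the struct before the call still ends up with zeroed reserved fields afterwards, which is the property the ABI argument relies on.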
^ permalink raw reply [relevance 0%]
* Re: vmxnet3 no longer functional on DPDK 21.11
@ 2021-12-06 1:52 3% ` Lewis Donzis
0 siblings, 0 replies; 200+ results
From: Lewis Donzis @ 2021-12-06 1:52 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, yongwang
----- On Nov 30, 2021, at 7:42 AM, Bruce Richardson bruce.richardson@intel.com wrote:
> On Mon, Nov 29, 2021 at 02:45:15PM -0600, Lewis Donzis wrote:
>> Hello.
>> We just upgraded from 21.08 to 21.11 and it's rather astounding the
>> number of incompatible changes in three months. Not a big deal, just
>> kind of a surprise, that's all.
>> Anyway, the problem is that the vmxnet3 driver is no longer functional
>> on FreeBSD.
>> In drivers/net/vmxnet3/vmxnet3_ethdev.c, vmxnet3_dev_start() gets an
>> error calling rte_intr_enable(). So it logs "interrupt enable failed"
>> and returns an error.
>> In lib/eal/freebsd/eal_interrupts.c, rte_intr_enable() is returning an
>> error because rte_intr_dev_fd_get(intr_handle) is returning -1.
>> I don't see how that could ever return anything other than -1 since it
>> appears that there is no code that ever calls rte_intr_dev_fd_set()
>> with a value other than -1 on FreeBSD. Also weird to me is that even
>> if it didn't get an error, the switch statement that follows looks like
>> it will return an error in every case.
>> Nonetheless, it worked in 21.08, and I can't quite see why the
>> difference, so I must be missing something.
>> For the moment, I just commented the "return -EIO" in vmxnet3_ethdev.c,
>> and it's now working again, but that's obviously not the correct
>> solution.
>> Can someone who's knowledgable about this mechanism perhaps explain a
>> little bit about what's going on? I'll be happy to help troubleshoot.
>> It seems like it must be something simple, but I just don't see it yet.
>
> Hi
>
> if you have the chance, it would be useful if you could use "git bisect" to
> identify the commit in 21.11 that broke this driver. Looking through the
> logs for 21.11 I can't identify any particular likely-looking commit, so
> bisect is likely a good way to start looking into this.
>
> Regards,
> /Bruce
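The failing path described in the report above (rte_intr_dev_fd_get() returning -1, so rte_intr_enable() errors out before its switch statement ever runs) can be sketched with a mocked miniature. All mock_-prefixed names are hypothetical stand-ins, not the actual eal_interrupts.c code:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the interrupt handle: on FreeBSD, per the
 * report, nothing ever sets dev_fd to a valid descriptor, so it stays
 * at its initial -1. */
struct mock_intr_handle {
	int dev_fd;
};

static int
mock_intr_dev_fd_get(const struct mock_intr_handle *h)
{
	return h->dev_fd;
}

/* Sketch of why interrupt enable fails: with dev_fd still -1 the
 * function bails out immediately, which is the error that
 * vmxnet3_dev_start() then logs as "interrupt enable failed". */
static int
mock_intr_enable(const struct mock_intr_handle *h)
{
	if (mock_intr_dev_fd_get(h) < 0)
		return -EIO;
	return 0; /* real code would continue into the per-type switch */
}
```

With the handle left at its initialized value, the enable call can only fail, matching the observed behavior.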
Hi, Bruce. git bisect is very time-consuming and very cool!
I went back to 21.08, about 1100 commits, and worked through the process, but then I realized that I had forgotten to run ninja on one of the steps, so I did it again.
I also re-checked it after the bisect, just to make sure that c87d435a4d79739c0cec2ed280b94b41cb908af7 is good, and 7a0935239b9eb817c65c03554a9954ddb8ea5044 is bad.
Thanks,
lew
Here's the result:
root@fbdev:/usr/local/share/dpdk-git # git bisect start
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
root@fbdev:/usr/local/share/dpdk-git # git bisect good 74bd4072996e64b0051d24d8d641554d225db196
Bisecting: 556 revisions left to test after this (roughly 9 steps)
[e2a289a788c0a128a15bc0f1099af7c031201ac5] net/ngbe: add mailbox process operations
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 277 revisions left to test after this (roughly 8 steps)
[5906be5af6570db8b70b307c96aace0b096d1a2c] ethdev: fix ID spelling in comments and log messages
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 138 revisions left to test after this (roughly 7 steps)
[a7c236b894a848c7bb9afb773a7e3c13615abaa8] net/cnxk: support meter ops get
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 69 revisions left to test after this (roughly 6 steps)
[14fc81aed73842d976dd19a93ca47e22d61c1759] ethdev: update modify field flow action
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 34 revisions left to test after this (roughly 5 steps)
[cdea571becb4dabf9962455f671af0c99594e380] common/sfc_efx/base: add flag to use Rx prefix user flag
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 17 revisions left to test after this (roughly 4 steps)
[7a0935239b9eb817c65c03554a9954ddb8ea5044] ethdev: make fast-path functions to use new flat array
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 8 revisions left to test after this (roughly 3 steps)
[012bf708c20f4b23d055717e28f8de74887113d8] net/sfc: support group flows in tunnel offload
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[9df2d8f5cc9653d6413cb2240c067ea455ab7c3c] net/sfc: support counters in tunnel offload jump rules
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 2 revisions left to test after this (roughly 1 step)
[c024496ae8c8c075b0d0a3b43119475787b24b45] ethdev: allocate max space for internal queue array
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 0 revisions left to test after this (roughly 1 step)
[c87d435a4d79739c0cec2ed280b94b41cb908af7] ethdev: copy fast-path API into separate structure
root@fbdev:/usr/local/share/dpdk-git # git bisect good
7a0935239b9eb817c65c03554a9954ddb8ea5044 is the first bad commit
commit 7a0935239b9eb817c65c03554a9954ddb8ea5044
Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Wed Oct 13 14:37:02 2021 +0100
ethdev: make fast-path functions to use new flat array
Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user app is required) and
PMD developers (no changes in PMD is required).
One extra thing to note - RX/TX callback invocation will cause extra
function call with these changes. That might cause some insignificant
slowdown for code-path where RX/TX callbacks are heavily involved.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
lib/ethdev/ethdev_private.c | 31 +++++
lib/ethdev/rte_ethdev.h | 270 +++++++++++++++++++++++++++++++-------------
lib/ethdev/version.map | 3 +
3 files changed, 226 insertions(+), 78 deletions(-)
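As a side note, a manual session like the one above can be automated with `git bisect run`, which marks each checked-out revision good or bad from a command's exit status (0 = good, 1-124 = bad). A self-contained toy demo on a throwaway repository follows; for a real DPDK bisect the command would be a build plus a functional test, which is harder to script and is presumably why this one was done by hand:

```shell
# Toy `git bisect run` demo in a scratch repo: the "bug" appears at c4.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5; do
    echo "$i" > f
    git add f
    git commit -qm "c$i"
done
# Mark HEAD (c5) bad and HEAD~4 (c1) good, then let bisect drive:
git bisect start HEAD HEAD~4 >/dev/null
# exit 0 = good; the commit is "bad" once f holds 4 or more
git bisect run sh -c 'test "$(cat f)" -lt 4' >/dev/null
git bisect log | grep "first bad commit"
```

The final line of the bisect log names c4 as the first bad commit, found in two test runs instead of checking every revision.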
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
@ 2021-12-06 8:35 1% jerinj
2021-12-06 13:35 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: jerinj @ 2021-12-06 8:35 UTC (permalink / raw)
To: dev, Thomas Monjalon, Akhil Goyal, Declan Doherty, Jerin Jacob,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov
Cc: ferruh.yigit, sburla, lironh
From: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, and in view of enabling a unified driver
for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
This patch does the following:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change references to OCTEONTX2 to OCTEON 9. The kernel-related
documentation is not accounted for in this change, as the kernel
documentation still uses OCTEONTX2.
Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 37 -
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 4 +
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 12 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 520 ---
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 12 +-
doc/guides/rel_notes/release_19_11.rst | 6 +-
doc/guides/rel_notes/release_20_02.rst | 8 +-
doc/guides/rel_notes/release_20_05.rst | 4 +-
doc/guides/rel_notes/release_20_08.rst | 6 +-
doc/guides/rel_notes/release_20_11.rst | 8 +-
doc/guides/rel_notes/release_21_02.rst | 10 +-
doc/guides/rel_notes/release_21_05.rst | 6 +-
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/event/octeontx2/version.map | 3 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
| 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
usertools/dpdk-devbind.py | 12 +-
156 files changed, 121 insertions(+), 52149 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/event/octeontx2/version.map
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 854b81f2a3..336bbb3547 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -534,15 +534,6 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-F: drivers/common/octeontx2/
-F: drivers/mempool/octeontx2/
-F: doc/guides/platform/img/octeontx2_*
-F: doc/guides/platform/octeontx2.rst
-F: doc/guides/mempool/octeontx2.rst
-
Bus Drivers
-----------
@@ -795,21 +786,6 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-M: Kiran Kumar K <kirankumark@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/octeontx2/
-F: doc/guides/nics/features/octeontx2*.ini
-F: doc/guides/nics/octeontx2.rst
-
-Marvell OCTEON TX2 - security
-M: Anoob Joseph <anoobj@marvell.com>
-T: git://dpdk.org/next/dpdk-next-crypto
-F: drivers/common/octeontx2/otx2_sec*
-F: drivers/net/octeontx2/otx2_ethdev_sec*
-
Marvell OCTEON TX EP - endpoint
M: Nalla Pradeep <pnalla@marvell.com>
M: Radha Mohan Chintakuntla <radhac@marvell.com>
@@ -1115,13 +1091,6 @@ F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
-Marvell OCTEON TX2 crypto
-M: Ankur Dwivedi <adwivedi@marvell.com>
-M: Anoob Joseph <anoobj@marvell.com>
-F: drivers/crypto/octeontx2/
-F: doc/guides/cryptodevs/octeontx2.rst
-F: doc/guides/cryptodevs/features/octeontx2.ini
-
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/crypto/mlx5/
@@ -1298,12 +1267,6 @@ M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
-Marvell OCTEON TX2
-M: Pavan Nikhilesh <pbhagavatula@marvell.com>
-M: Jerin Jacob <jerinj@marvell.com>
-F: drivers/event/octeontx2/
-F: doc/guides/eventdevs/octeontx2.rst
-
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 2b480adfba..344a609a4d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -341,7 +341,6 @@ driver_test_names = [
'cryptodev_dpaa_sec_autotest',
'cryptodev_dpaa2_sec_autotest',
'cryptodev_null_autotest',
- 'cryptodev_octeontx2_autotest',
'cryptodev_openssl_autotest',
'cryptodev_openssl_asym_autotest',
'cryptodev_qat_autotest',
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cdadb..293f59b48c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -15615,12 +15615,6 @@ test_cryptodev_octeontx(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX_SYM_PMD));
}
-static int
-test_cryptodev_octeontx2(void)
-{
- return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
-}
-
static int
test_cryptodev_caam_jr(void)
{
@@ -15733,7 +15727,6 @@ REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..70f23a3f67 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -68,7 +68,6 @@
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..68f4d8e7a6 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -2375,20 +2375,6 @@ test_cryptodev_octeontx_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
-static int
-test_cryptodev_octeontx2_asym(void)
-{
- gbl_driver_id = rte_cryptodev_driver_id_get(
- RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
- if (gbl_driver_id == -1) {
- RTE_LOG(ERR, USER1, "OCTEONTX2 PMD must be loaded.\n");
- return TEST_FAILED;
- }
-
- /* Use test suite registered for crypto_octeontx PMD */
- return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
-}
-
static int
test_cryptodev_cn9k_asym(void)
{
@@ -2424,8 +2410,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_TEST_COMMAND(cryptodev_octeontx_asym_autotest,
test_cryptodev_octeontx_asym);
-
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_asym_autotest,
- test_cryptodev_octeontx2_asym);
REGISTER_TEST_COMMAND(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_TEST_COMMAND(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 843d9766b0..10028fe11d 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1018,12 +1018,6 @@ test_eventdev_selftest_octeontx(void)
return test_eventdev_selftest_impl("event_octeontx", "");
}
-static int
-test_eventdev_selftest_octeontx2(void)
-{
- return test_eventdev_selftest_impl("event_octeontx2", "");
-}
-
static int
test_eventdev_selftest_dpaa2(void)
{
@@ -1052,8 +1046,6 @@ REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
test_eventdev_selftest_octeontx);
-REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
- test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc
index 88e5f10945..a3578c03a1 100644
--- a/config/arm/arm64_cn10k_linux_gcc
+++ b/config/arm/arm64_cn10k_linux_gcc
@@ -14,4 +14,3 @@ endian = 'little'
[properties]
platform = 'cn10k'
-disable_drivers = 'common/octeontx2'
diff --git a/config/arm/arm64_octeontx2_linux_gcc b/config/arm/arm64_cn9k_linux_gcc
similarity index 84%
rename from config/arm/arm64_octeontx2_linux_gcc
rename to config/arm/arm64_cn9k_linux_gcc
index 8fbdd3868d..a94b44a551 100644
--- a/config/arm/arm64_octeontx2_linux_gcc
+++ b/config/arm/arm64_cn9k_linux_gcc
@@ -13,5 +13,4 @@ cpu = 'armv8-a'
endian = 'little'
[properties]
-platform = 'octeontx2'
-disable_drivers = 'common/cnxk'
+platform = 'cn9k'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 213324d262..16e808cdd5 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -139,7 +139,7 @@ implementer_cavium = {
'march_features': ['crc', 'crypto', 'lse'],
'compiler_options': ['-mcpu=octeontx2'],
'flags': [
- ['RTE_MACHINE', '"octeontx2"'],
+ ['RTE_MACHINE', '"cn9k"'],
['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true],
['RTE_MAX_LCORE', 36],
@@ -340,8 +340,8 @@ soc_n2 = {
'numa': false
}
-soc_octeontx2 = {
- 'description': 'Marvell OCTEON TX2',
+soc_cn9k = {
+ 'description': 'Marvell OCTEON 9',
'implementer': '0x43',
'part_number': '0xb2',
'numa': false
@@ -377,6 +377,7 @@ generic_aarch32: Generic un-optimized build for armv8 aarch32 execution mode.
armada: Marvell ARMADA
bluefield: NVIDIA BlueField
centriq2400: Qualcomm Centriq 2400
+cn9k: Marvell OCTEON 9
cn10k: Marvell OCTEON 10
dpaa: NXP DPAA
emag: Ampere eMAG
@@ -385,7 +386,6 @@ kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
n2: Arm Neoverse N2
-octeontx2: Marvell OCTEON TX2
stingray: Broadcom Stingray
thunderx2: Marvell ThunderX2 T99
thunderxt88: Marvell ThunderX T88
@@ -399,6 +399,7 @@ socs = {
'armada': soc_armada,
'bluefield': soc_bluefield,
'centriq2400': soc_centriq2400,
+ 'cn9k': soc_cn9k,
'cn10k' : soc_cn10k,
'dpaa': soc_dpaa,
'emag': soc_emag,
@@ -407,7 +408,6 @@ socs = {
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
'n2': soc_n2,
- 'octeontx2': soc_octeontx2,
'stingray': soc_stingray,
'thunderx2': soc_thunderx2,
'thunderxt88': soc_thunderxt88
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ca523eb94c..675f10142e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,6 +48,10 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
+ if grep -qE "\<librte_*.*_octeontx2" $dump; then
+ echo "Skipped removed driver $name."
+ continue
+ fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
echo "Error: cannot find $name in $newdir" >&2
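For reference, the ERE added to check-abi.sh above matches the dump files of the removed octeontx2 libraries while leaving the cnxk ones alone. A quick self-contained check, with illustrative file names that only mimic the real ABI dump contents:

```shell
# Illustrative check of the grep pattern used in check-abi.sh above.
tmp=$(mktemp -d)
printf 'librte_common_octeontx2.so.22.0\n' > "$tmp/octeontx2.dump"
printf 'librte_common_cnxk.so.22.0\n'      > "$tmp/cnxk.dump"
# Matches: "librte" + underscores + anything + "_octeontx2"
grep -qE "\<librte_*.*_octeontx2" "$tmp/octeontx2.dump" && echo "skipped octeontx2"
grep -qE "\<librte_*.*_octeontx2" "$tmp/cnxk.dump" || echo "kept cnxk"
```

Note that in the pattern the `*` binds to the preceding `_`, so it reads as "librte" plus zero or more underscores; the trailing `.*_octeontx2` is what actually selects the removed drivers.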
diff --git a/doc/guides/cryptodevs/features/octeontx2.ini b/doc/guides/cryptodevs/features/octeontx2.ini
deleted file mode 100644
index c54dc9409c..0000000000
--- a/doc/guides/cryptodevs/features/octeontx2.ini
+++ /dev/null
@@ -1,87 +0,0 @@
-;
-; Supported features of the 'octeontx2' crypto driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Symmetric crypto = Y
-Asymmetric crypto = Y
-Sym operation chaining = Y
-HW Accelerated = Y
-Protocol offload = Y
-In Place SGL = Y
-OOP SGL In LB Out = Y
-OOP SGL In SGL Out = Y
-OOP LB In LB Out = Y
-RSA PRIV OP KEY QT = Y
-Digest encrypted = Y
-Symmetric sessionless = Y
-
-;
-; Supported crypto algorithms of 'octeontx2' crypto driver.
-;
-[Cipher]
-NULL = Y
-3DES CBC = Y
-3DES ECB = Y
-AES CBC (128) = Y
-AES CBC (192) = Y
-AES CBC (256) = Y
-AES CTR (128) = Y
-AES CTR (192) = Y
-AES CTR (256) = Y
-AES XTS (128) = Y
-AES XTS (256) = Y
-DES CBC = Y
-KASUMI F8 = Y
-SNOW3G UEA2 = Y
-ZUC EEA3 = Y
-
-;
-; Supported authentication algorithms of 'octeontx2' crypto driver.
-;
-[Auth]
-NULL = Y
-AES GMAC = Y
-KASUMI F9 = Y
-MD5 = Y
-MD5 HMAC = Y
-SHA1 = Y
-SHA1 HMAC = Y
-SHA224 = Y
-SHA224 HMAC = Y
-SHA256 = Y
-SHA256 HMAC = Y
-SHA384 = Y
-SHA384 HMAC = Y
-SHA512 = Y
-SHA512 HMAC = Y
-SNOW3G UIA2 = Y
-ZUC EIA3 = Y
-
-;
-; Supported AEAD algorithms of 'octeontx2' crypto driver.
-;
-[AEAD]
-AES GCM (128) = Y
-AES GCM (192) = Y
-AES GCM (256) = Y
-CHACHA20-POLY1305 = Y
-
-;
-; Supported Asymmetric algorithms of the 'octeontx2' crypto driver.
-;
-[Asymmetric]
-RSA = Y
-DSA =
-Modular Exponentiation = Y
-Modular Inversion =
-Diffie-hellman =
-ECDSA = Y
-ECPM = Y
-
-;
-; Supported Operating systems of the 'octeontx2' crypto driver.
-;
-[OS]
-Linux = Y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 3dcc2ecd2e..39cca6dbde 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -22,7 +22,6 @@ Crypto Device Drivers
dpaa_sec
kasumi
octeontx
- octeontx2
openssl
mlx5
mvsam
diff --git a/doc/guides/cryptodevs/octeontx2.rst b/doc/guides/cryptodevs/octeontx2.rst
deleted file mode 100644
index 811e61a1f6..0000000000
--- a/doc/guides/cryptodevs/octeontx2.rst
+++ /dev/null
@@ -1,188 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-
-Marvell OCTEON TX2 Crypto Poll Mode Driver
-==========================================
-
-The OCTEON TX2 crypto poll mode driver provides support for offloading
-cryptographic operations to cryptographic accelerator units on the
-**OCTEON TX2** :sup:`®` family of processors (CN9XXX).
-
-More information about OCTEON TX2 SoCs may be obtained from `<https://www.marvell.com>`_
-
-Features
---------
-
-The OCTEON TX2 crypto PMD has support for:
-
-Symmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Cipher algorithms:
-
-* ``RTE_CRYPTO_CIPHER_NULL``
-* ``RTE_CRYPTO_CIPHER_3DES_CBC``
-* ``RTE_CRYPTO_CIPHER_3DES_ECB``
-* ``RTE_CRYPTO_CIPHER_AES_CBC``
-* ``RTE_CRYPTO_CIPHER_AES_CTR``
-* ``RTE_CRYPTO_CIPHER_AES_XTS``
-* ``RTE_CRYPTO_CIPHER_DES_CBC``
-* ``RTE_CRYPTO_CIPHER_KASUMI_F8``
-* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
-* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
-
-Hash algorithms:
-
-* ``RTE_CRYPTO_AUTH_NULL``
-* ``RTE_CRYPTO_AUTH_AES_GMAC``
-* ``RTE_CRYPTO_AUTH_KASUMI_F9``
-* ``RTE_CRYPTO_AUTH_MD5``
-* ``RTE_CRYPTO_AUTH_MD5_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA1``
-* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA224``
-* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA256``
-* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA384``
-* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA512``
-* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
-* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
-* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
-
-AEAD algorithms:
-
-* ``RTE_CRYPTO_AEAD_AES_GCM``
-* ``RTE_CRYPTO_AEAD_CHACHA20_POLY1305``
-
-Asymmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* ``RTE_CRYPTO_ASYM_XFORM_RSA``
-* ``RTE_CRYPTO_ASYM_XFORM_MODEX``
-
-
-Installation
-------------
-
-The OCTEON TX2 crypto PMD may be compiled natively on an OCTEON TX2 platform or
-cross-compiled on an x86 platform.
-
-Refer to :doc:`../platform/octeontx2` for instructions to build your DPDK
-application.
-
-.. note::
-
- The OCTEON TX2 crypto PMD uses services from the kernel mode OCTEON TX2
- crypto PF driver in linux. This driver is included in the OCTEON TX SDK.
-
-Initialization
---------------
-
-List the CPT PF devices available on your OCTEON TX2 platform:
-
-.. code-block:: console
-
- lspci -d:a0fd
-
-``a0fd`` is the CPT PF device id. You should see output similar to:
-
-.. code-block:: console
-
- 0002:10:00.0 Class 1080: Device 177d:a0fd
-
-Set ``sriov_numvfs`` on the CPT PF device, to create a VF:
-
-.. code-block:: console
-
- echo 1 > /sys/bus/pci/drivers/octeontx2-cpt/0002:10:00.0/sriov_numvfs
-
-Bind the CPT VF device to the vfio_pci driver:
-
-.. code-block:: console
-
- echo '177d a0fe' > /sys/bus/pci/drivers/vfio-pci/new_id
- echo 0002:10:00.1 > /sys/bus/pci/devices/0002:10:00.1/driver/unbind
- echo 0002:10:00.1 > /sys/bus/pci/drivers/vfio-pci/bind
-
-Another way to bind the VF would be to use the ``dpdk-devbind.py`` script:
-
-.. code-block:: console
-
- cd <dpdk directory>
- ./usertools/dpdk-devbind.py -u 0002:10:00.1
- ./usertools/dpdk-devbind.py -b vfio-pci 0002:10.00.1
-
-.. note::
-
- * For CN98xx SoC, it is recommended to use even and odd DBDF VFs to achieve
- higher performance as even VF uses one crypto engine and odd one uses
- another crypto engine.
-
- * Ensure that sufficient huge pages are available for your application::
-
- dpdk-hugepages.py --setup 4G --pagesize 512M
-
- Refer to :ref:`linux_gsg_hugepages` for more details.
-
-Debugging Options
------------------
-
-.. _table_octeontx2_crypto_debug_options:
-
-.. table:: OCTEON TX2 crypto PMD debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | CPT | --log-level='pmd\.crypto\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Testing
--------
-
-The symmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_autotest
-
-The asymmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_asym_autotest
-
-
-Lookaside IPsec Support
------------------------
-
-The OCTEON TX2 SoC can accelerate IPsec traffic in lookaside protocol mode,
-with its **cryptographic accelerator (CPT)**. ``OCTEON TX2 crypto PMD`` implements
-this as an ``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL`` offload.
-
-Refer to :doc:`../prog_guide/rte_security` for more details on protocol offloads.
-
-This feature can be tested with ipsec-secgw sample application.
-
-
-Features supported
-~~~~~~~~~~~~~~~~~~
-
-* IPv4
-* IPv6
-* ESP
-* Tunnel mode
-* Transport mode(IPv4)
-* ESN
-* Anti-replay
-* UDP Encapsulation
-* AES-128/192/256-GCM
-* AES-128/192/256-CBC-SHA1-HMAC
-* AES-128/192/256-CBC-SHA256-128-HMAC
diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst
index da2dd59071..418b9a9d63 100644
--- a/doc/guides/dmadevs/cnxk.rst
+++ b/doc/guides/dmadevs/cnxk.rst
@@ -7,7 +7,7 @@ CNXK DMA Device Driver
======================
The ``cnxk`` dmadev driver provides a poll-mode driver (PMD) for Marvell DPI DMA
-Hardware Accelerator block found in OCTEONTX2 and OCTEONTX3 family of SoCs.
+Hardware Accelerator block found in OCTEON 9 and OCTEON 10 family of SoCs.
Each DMA queue is exposed as a VF function when SRIOV is enabled.
The block supports following modes of DMA transfers:
diff --git a/doc/guides/eventdevs/features/octeontx2.ini b/doc/guides/eventdevs/features/octeontx2.ini
deleted file mode 100644
index 05b84beb6e..0000000000
--- a/doc/guides/eventdevs/features/octeontx2.ini
+++ /dev/null
@@ -1,30 +0,0 @@
-;
-; Supported features of the 'octeontx2' eventdev driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Scheduling Features]
-queue_qos = Y
-distributed_sched = Y
-queue_all_types = Y
-nonseq_mode = Y
-runtime_port_link = Y
-multiple_queue_port = Y
-carry_flow_id = Y
-maintenance_free = Y
-
-[Eth Rx adapter Features]
-internal_port = Y
-multi_eventq = Y
-
-[Eth Tx adapter Features]
-internal_port = Y
-
-[Crypto adapter Features]
-internal_port_op_new = Y
-internal_port_op_fwd = Y
-internal_port_qp_ev_bind = Y
-
-[Timer adapter Features]
-internal_port = Y
-periodic = Y
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index b11657f7ae..eed19ad28c 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -19,5 +19,4 @@ application through the eventdev API.
dsw
sw
octeontx
- octeontx2
opdl
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
deleted file mode 100644
index 0fa57abfa3..0000000000
--- a/doc/guides/eventdevs/octeontx2.rst
+++ /dev/null
@@ -1,178 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 SSO Eventdev Driver
-===============================
-
-The OCTEON TX2 SSO PMD (**librte_event_octeontx2**) provides poll mode
-eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2**
-SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 SSO PMD are:
-
-- 256 Event queues
-- 26 (dual) and 52 (single) Event ports
-- HW event scheduler
-- Supports 1M flows per event queue
-- Flow based event pipelining
-- Flow pinning support in flow based event pipelining
-- Queue based event pipelining
-- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
-- Event scheduling QoS based on event queue priority
-- Open system with configurable amount of outstanding events limited only by
- DRAM
-- HW accelerated dequeue timeout support to enable power management
-- HW managed event timers support through TIM, with high precision and
- time granularity of 2.5us.
-- Up to 256 TIM rings aka event timer adapters.
-- Up to 8 rings traversed in parallel.
-- HW managed packets enqueued from ethdev to eventdev exposed through event eth
- RX adapter.
-- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
- capability while maintaining receive packet order.
-- Full Rx/Tx offload support defined through ethdev queue config.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-
-Runtime Config Options
-----------------------
-
-- ``Maximum number of in-flight events`` (default ``8192``)
-
- In **Marvell OCTEON TX2** the max number of in-flight events are only limited
- by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide
- upper limit for in-flight events.
- For example::
-
- -a 0002:0e:00.0,xae_cnt=16384
-
-- ``Force legacy mode``
-
- The ``single_ws`` devargs parameter is introduced to force legacy mode i.e
- single workslot mode in SSO and disable the default dual workslot mode.
- For example::
-
- -a 0002:0e:00.0,single_ws=1
-
-- ``Event Group QoS support``
-
- SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
- events. By default the buffers are assigned to the SSO GGRPs to
- satisfy minimum HW requirements. SSO is free to assign the remaining
- buffers to GGRPs based on a preconfigured threshold.
- We can control the QoS of SSO GGRP by modifying the above mentioned
- thresholds. GGRPs that have higher importance can be assigned higher
- thresholds than the rest. The dictionary format is as follows
- [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
- default.
- For example::
-
- -a 0002:0e:00.0,qos=[1-50-50-50]
-
-- ``TIM disable NPA``
-
- By default chunks are allocated from NPA then TIM can automatically free
- them when traversing the list of chunks. The ``tim_disable_npa`` devargs
- parameter disables NPA and uses software mempool to manage chunks
- For example::
-
- -a 0002:0e:00.0,tim_disable_npa=1
-
-- ``TIM modify chunk slots``
-
- The ``tim_chnk_slots`` devargs can be used to modify number of chunk slots.
- Chunks are used to store event timers, a chunk can be visualised as an array
- where the last element points to the next chunk and rest of them are used to
- store events. TIM traverses the list of chunks and enqueues the event timers
- to SSO. The default value is 255 and the max value is 4095.
- For example::
-
- -a 0002:0e:00.0,tim_chnk_slots=1023
-
-- ``TIM enable arm/cancel statistics``
-
- The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
- event timer adapter.
- For example::
-
- -a 0002:0e:00.0,tim_stats_ena=1
-
-- ``TIM limit max rings reserved``
-
- The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
- rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW
- resources we can avoid starving other applications by not grabbing all the
- rings.
- For example::
-
- -a 0002:0e:00.0,tim_rings_lmt=5
-
-- ``TIM ring control internal parameters``
-
- When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
- control each TIM rings internal parameters uniquely. The following dict
- format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
- default values.
- For Example::
-
- -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:0e:00.0,npa_lock_mask=0xf
-
-- ``Force Rx Back pressure``
-
- Force Rx back pressure when same mempool is used across ethernet device
- connected to event device.
-
- For example::
-
- -a 0002:0e:00.0,force_rx_bp=1
-
-Debugging Options
------------------
-
-.. _table_octeontx2_event_debug_options:
-
-.. table:: OCTEON TX2 event device debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' |
- +---+------------+-------------------------------------------------------+
-
-Limitations
------------
-
-Rx adapter support
-~~~~~~~~~~~~~~~~~~
-
-Using the same mempool for all the ethernet device ports connected to
-event device would cause back pressure to be asserted only on the first
-ethernet device.
-Back pressure is automatically disabled when using same mempool for all the
-ethernet devices connected to event device to override this applications can
-use `force_rx_bp=1` device arguments.
-Using unique mempool per each ethernet device is recommended when they are
-connected to event device.
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
index ce53bc1ac7..e4b6ee7d31 100644
--- a/doc/guides/mempool/index.rst
+++ b/doc/guides/mempool/index.rst
@@ -13,6 +13,5 @@ application through the mempool API.
cnxk
octeontx
- octeontx2
ring
stack
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
deleted file mode 100644
index 1272c1e72b..0000000000
--- a/doc/guides/mempool/octeontx2.rst
+++ /dev/null
@@ -1,92 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 NPA Mempool Driver
-=============================
-
-The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool
-driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-OCTEON TX2 NPA PMD supports:
-
-- Up to 128 NPA LFs
-- 1M Pools per LF
-- HW mempool manager
-- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path.
-- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-Pre-Installation Configuration
-------------------------------
-
-
-Runtime Config Options
-~~~~~~~~~~~~~~~~~~~~~~
-
-- ``Maximum number of mempools per application`` (default ``128``)
-
- The maximum number of mempools per application needs to be configured on
- HW during mempool driver initialization. HW can support up to 1M mempools,
- Since each mempool costs set of HW resources, the ``max_pools`` ``devargs``
- parameter is being introduced to configure the number of mempools required
- for the application.
- For example::
-
- -a 0002:02:00.0,max_pools=512
-
- With the above configuration, the driver will set up only 512 mempools for
- the given application to save HW resources.
-
-.. note::
-
- Since this configuration is per application, the end user needs to
- provide ``max_pools`` parameter to the first PCIe device probed by the given
- application.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-Debugging Options
-~~~~~~~~~~~~~~~~~
-
-.. _table_octeontx2_mempool_debug_options:
-
-.. table:: OCTEON TX2 mempool debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Standalone mempool device
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
- The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices
- available in the system. In order to avoid, the end user to bind the mempool
- device prior to use ethdev and/or eventdev device, the respective driver
- configures an NPA LF and attach to the first probed ethdev or eventdev device.
- In case, if end user need to run mempool as a standalone device
- (without ethdev or eventdev), end user needs to bind a mempool device using
- ``usertools/dpdk-devbind.py``
-
- Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device::
-
- echo "mempool_autotest" | <build_dir>/app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa"
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 84f9865654..2119ba51c8 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -178,7 +178,7 @@ Runtime Config Options
* ``rss_adder<7:0> = flow_tag<7:0>``
Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
+ RSS adder scheme used in OCTEON 9 products.
By default, the driver runs in the latter mode.
Setting this flag to 1 to select the legacy mode.
@@ -291,7 +291,7 @@ Limitations
The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager.
``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
+recycling on OCTEON 9 SoC platform.
CRC stripping
~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
deleted file mode 100644
index bf0c2890f2..0000000000
--- a/doc/guides/nics/features/octeontx2.ini
+++ /dev/null
@@ -1,97 +0,0 @@
-;
-; Supported features of the 'octeontx2' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Rx interrupt = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-TSO = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Timesync = Y
-Timestamp offload = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Stats per queue = Y
-Extended stats = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
-
-[rte_flow items]
-any = Y
-arp_eth_ipv4 = Y
-esp = Y
-eth = Y
-e_tag = Y
-geneve = Y
-gre = Y
-gre_key = Y
-gtpc = Y
-gtpu = Y
-higig2 = Y
-icmp = Y
-ipv4 = Y
-ipv6 = Y
-ipv6_ext = Y
-mpls = Y
-nvgre = Y
-raw = Y
-sctp = Y
-tcp = Y
-udp = Y
-vlan = Y
-vxlan = Y
-vxlan_gpe = Y
-
-[rte_flow actions]
-count = Y
-drop = Y
-flag = Y
-mark = Y
-of_pop_vlan = Y
-of_push_vlan = Y
-of_set_vlan_pcp = Y
-of_set_vlan_vid = Y
-pf = Y
-port_id = Y
-port_representor = Y
-queue = Y
-rss = Y
-security = Y
-vf = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
deleted file mode 100644
index c405db7cf9..0000000000
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ /dev/null
@@ -1,48 +0,0 @@
-;
-; Supported features of the 'octeontx2_vec' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
deleted file mode 100644
index 5ac7a49a5c..0000000000
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ /dev/null
@@ -1,45 +0,0 @@
-;
-; Supported features of the 'octeontx2_vf' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-Multiprocess aware = Y
-Rx interrupt = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-TSO = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1c94caccea..f48e9f815c 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -52,7 +52,6 @@ Network Interface Controller Drivers
ngbe
null
octeontx
- octeontx2
octeontx_ep
pfe
qede
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
deleted file mode 100644
index 4ce067f2c5..0000000000
--- a/doc/guides/nics/octeontx2.rst
+++ /dev/null
@@ -1,465 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(C) 2019 Marvell International Ltd.
-
-OCTEON TX2 Poll Mode driver
-===========================
-
-The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
-driver support for the inbuilt network device found in **Marvell OCTEON TX2**
-SoC family as well as for their virtual functions (VF) in SR-IOV context.
-
-More information can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 Ethdev PMD are:
-
-- Packet type information
-- Promiscuous mode
-- Jumbo frames
-- SR-IOV VF
-- Lock-free Tx queue
-- Multiple queues for TX and RX
-- Receiver Side Scaling (RSS)
-- MAC/VLAN filtering
-- Multicast MAC filtering
-- Generic flow API
-- Inner and Outer Checksum offload
-- VLAN/QinQ stripping and insertion
-- Port hardware statistics
-- Link state information
-- Link flow control
-- MTU update
-- Scatter-Gather IO support
-- Vector Poll mode driver
-- Debug utilities - Context dump and error interrupt support
-- IEEE1588 timestamping
-- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
-- Support Rx interrupt
-- Inline IPsec processing support
-- :ref:`Traffic Management API <otx2_tmapi>`
-
-Prerequisites
--------------
-
-See :doc:`../platform/octeontx2` for setup information.
-
-
-Driver compilation and testing
-------------------------------
-
-Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
-for details.
-
-#. Running testpmd:
-
- Follow instructions available in the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
- to run testpmd.
-
- Example output:
-
- .. code-block:: console
-
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
- EAL: Detected 24 lcore(s)
- EAL: Detected 1 NUMA nodes
- EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
- EAL: No available hugepages reported in hugepages-2048kB
- EAL: Probing VFIO support...
- EAL: VFIO support initialized
- EAL: PCI device 0002:02:00.0 on NUMA socket 0
- EAL: probe driver: 177d:a063 net_octeontx2
- EAL: using IOMMU type 1 (Type 1)
- testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
- testpmd: preferred mempool ops selected: octeontx2_npa
- Configuring Port 0 (socket 0)
- PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
-
- Port 0: link state change event
- Port 0: 36:10:66:88:7A:57
- Checking link statuses...
- Done
- No commandline core given, start packet forwarding
- io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
- Logical Core 9 (socket 0) forwards packets on 1 streams:
- RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
-
- io packet forwarding packets/burst=32
- nb forwarding cores=1 - nb forwarding ports=1
- port 0: RX queue number: 1 Tx queue number: 1
- Rx offloads=0x0 Tx offloads=0x10000
- RX queue: 0
- RX desc=512 - RX free threshold=0
- RX threshold registers: pthresh=0 hthresh=0 wthresh=0
- RX Offloads=0x0
- TX queue: 0
- TX desc=512 - TX free threshold=0
- TX threshold registers: pthresh=0 hthresh=0 wthresh=0
- TX offloads=0x10000 - TX RS bit threshold=0
- Press enter to exit
-
-Runtime Config Options
-----------------------
-
-- ``Rx&Tx scalar mode enable`` (default ``0``)
-
- Ethdev supports both scalar and vector mode, it may be selected at runtime
- using ``scalar_enable`` ``devargs`` parameter.
-
-- ``RSS reta size`` (default ``64``)
-
- RSS redirection table size may be configured during runtime using ``reta_size``
- ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,reta_size=256
-
- With the above configuration, reta table of size 256 is populated.
-
-- ``Flow priority levels`` (default ``3``)
-
- RTE Flow priority levels can be configured during runtime using
- ``flow_max_priority`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_max_priority=10
-
- With the above configuration, priority level was set to 10 (0-9). Max
- priority level supported is 32.
-
-- ``Reserve Flow entries`` (default ``8``)
-
- RTE flow entries can be pre allocated and the size of pre allocation can be
- selected runtime using ``flow_prealloc_size`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_prealloc_size=4
-
- With the above configuration, pre alloc size was set to 4. Max pre alloc
- size supported is 32.
-
-- ``Max SQB buffer count`` (default ``512``)
-
- Send queue descriptor buffer count may be limited during runtime using
- ``max_sqb_count`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,max_sqb_count=64
-
- With the above configuration, each send queue's descriptor buffer count is
- limited to a maximum of 64 buffers.
-
-- ``Switch header enable`` (default ``none``)
-
- A port can be configured to a specific switch header type by using
- ``switch_header`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,switch_header="higig2"
-
- With the above configuration, higig2 will be enabled on that port and the
- traffic on this port should be higig2 traffic only. Supported switch header
- types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
-
-- ``RSS tag as XOR`` (default ``0``)
-
- C0 HW revision onward, The HW gives an option to configure the RSS adder as
-
- * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
-
- * ``rss_adder<7:0> = flow_tag<7:0>``
-
- Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
-
- By default, the driver runs in the latter mode from C0 HW revision onward.
- Setting this flag to 1 to select the legacy mode.
-
- For example to select the legacy mode(RSS tag adder as XOR)::
-
- -a 0002:02:00.0,tag_as_xor=1
-
-- ``Max SPI for inbound inline IPsec`` (default ``1``)
-
- Max SPI supported for inbound inline IPsec processing can be specified by
- ``ipsec_in_max_spi`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,ipsec_in_max_spi=128
-
- With the above configuration, application can enable inline IPsec processing
- on 128 SAs (SPI 0-127).
-
-- ``Lock Rx contexts in NDC cache``
-
- Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_rx_ctx=1
-
-- ``Lock Tx contexts in NDC cache``
-
- Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_tx_ctx=1
-
-.. note::
-
- Above devarg parameters are configurable per device, user needs to pass the
- parameters to all the PCIe devices if application requires to configure on
- all the ethdev ports.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-.. _otx2_tmapi:
-
-Traffic Management API
-----------------------
-
-OCTEON TX2 PMD supports generic DPDK Traffic Management API which allows to
-configure the following features:
-
-#. Hierarchical scheduling
-#. Single rate - Two color, Two rate - Three color shaping
-
-Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
-
-Every parent can have atmost 10 SP Children and unlimited DWRR children.
-
-Both PF & VF supports traffic management API with PF supporting 6 levels
-and VF supporting 5 levels of topology.
-
-Limitations
------------
-
-``mempool_octeontx2`` external mempool handler dependency
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler
-as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
-the host interface irrespective of the offload configuration.
-
-Multicast MAC filtering
-~~~~~~~~~~~~~~~~~~~~~~~
-
-``net_octeontx2`` PMD supports multicast mac filtering feature only on physical
-function devices.
-
-SDP interface support
-~~~~~~~~~~~~~~~~~~~~~
-OCTEON TX2 SDP interface support is limited to PF device, No VF support.
-
-Inline Protocol Processing
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` PMD doesn't support the following features for packets to be
-inline protocol processed.
-- TSO offload
-- VLAN/QinQ offload
-- Fragmentation
-
-Debugging Options
------------------
-
-.. _table_octeontx2_ethdev_debug_options:
-
-.. table:: OCTEON TX2 ethdev debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
- +---+------------+-------------------------------------------------------+
-
-RTE Flow Support
-----------------
-
-The OCTEON TX2 SoC family NIC has support for the following patterns and
-actions.
-
-Patterns:
-
-.. _table_octeontx2_supported_flow_item_types:
-
-.. table:: Item types
-
- +----+--------------------------------+
- | # | Pattern Type |
- +====+================================+
- | 1 | RTE_FLOW_ITEM_TYPE_ETH |
- +----+--------------------------------+
- | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
- +----+--------------------------------+
- | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
- +----+--------------------------------+
- | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
- +----+--------------------------------+
- | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
- +----+--------------------------------+
- | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
- +----+--------------------------------+
- | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
- +----+--------------------------------+
- | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
- +----+--------------------------------+
- | 9 | RTE_FLOW_ITEM_TYPE_UDP |
- +----+--------------------------------+
- | 10 | RTE_FLOW_ITEM_TYPE_TCP |
- +----+--------------------------------+
- | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
- +----+--------------------------------+
- | 12 | RTE_FLOW_ITEM_TYPE_ESP |
- +----+--------------------------------+
- | 13 | RTE_FLOW_ITEM_TYPE_GRE |
- +----+--------------------------------+
- | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
- +----+--------------------------------+
- | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
- +----+--------------------------------+
- | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
- +----+--------------------------------+
- | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
- +----+--------------------------------+
- | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
- +----+--------------------------------+
- | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
- +----+--------------------------------+
- | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT |
- +----+--------------------------------+
- | 21 | RTE_FLOW_ITEM_TYPE_VOID |
- +----+--------------------------------+
- | 22 | RTE_FLOW_ITEM_TYPE_ANY |
- +----+--------------------------------+
- | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
- +----+--------------------------------+
- | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 |
- +----+--------------------------------+
- | 25 | RTE_FLOW_ITEM_TYPE_RAW |
- +----+--------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
- bits in the GRE header are equal to 0.
-
-Actions:
-
-.. _table_octeontx2_supported_ingress_action_types:
-
-.. table:: Ingress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_VOID |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_MARK |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
- +----+-----------------------------------------+
- | 7 | RTE_FLOW_ACTION_TYPE_RSS |
- +----+-----------------------------------------+
- | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
- +----+-----------------------------------------+
- | 9 | RTE_FLOW_ACTION_TYPE_PF |
- +----+-----------------------------------------+
- | 10 | RTE_FLOW_ACTION_TYPE_VF |
- +----+-----------------------------------------+
- | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN |
- +----+-----------------------------------------+
- | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID |
- +----+-----------------------------------------+
- | 13 | RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR |
- +----+-----------------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ACTION_TYPE_PORT_ID``, ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR``
- are only supported between PF and its VFs.
-
-.. _table_octeontx2_supported_egress_action_types:
-
-.. table:: Egress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP |
- +----+-----------------------------------------+
-
-Custom protocols supported in RTE Flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols.
-
-* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
-* ``NGIO`` can be parsed at L3 level.
-
-For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
-respective switch header.
-
-For example::
-
- -a 0002:02:00.0,switch_header="vlan_exdsa"
-
-The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
-pattern.
-
-- ``relative`` Selects the layer at which parsing is done.
-
- - 0 for ``exdsa`` and ``vlan_exdsa``.
-
- - 1 for ``NGIO``.
-
-- ``offset`` The offset in the header where the pattern should be matched.
-- ``length`` Length of the pattern.
-- ``pattern`` Pattern as a byte string.
-
-Example usage in testpmd::
-
- ./dpdk-testpmd -c 3 -w 0002:02:00.0,switch_header=exdsa -- -i \
- --rx-offloads=0x00080000 --rxq 8 --txq 8
- testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
- spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst
index b512ccfdab..2ec8a034b5 100644
--- a/doc/guides/nics/octeontx_ep.rst
+++ b/doc/guides/nics/octeontx_ep.rst
@@ -5,7 +5,7 @@ OCTEON TX EP Poll Mode driver
=============================
The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
-ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2**
+ethdev driver support for the virtual functions (VF) of **Marvell OCTEON 9**
and **Cavium OCTEON TX** families of adapters in SR-IOV context.
More information can be found at `Marvell Official Website
@@ -24,4 +24,4 @@ must be installed separately:
allocates resources such as number of VFs, input/output queues for itself and
the number of i/o queues each VF can use.
-See :doc:`../platform/octeontx2` for SDP interface information which provides PCIe endpoint support for a remote host.
+See :doc:`../platform/cnxk` for SDP interface information which provides PCIe endpoint support for a remote host.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 5213df3ccd..97e38c868c 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -13,6 +13,9 @@ More information about CN9K and CN10K SoC can be found at `Marvell Official Webs
Supported OCTEON cnxk SoCs
--------------------------
+- CN93xx
+- CN96xx
+- CN98xx
- CN106xx
- CNF105xx
@@ -583,6 +586,15 @@ Cross Compilation
Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
+CN9K:
+
+.. code-block:: console
+
+ meson build --cross-file config/arm/arm64_cn9k_linux_gcc
+ ninja -C build
+
+CN10K:
+
.. code-block:: console
meson build --cross-file config/arm/arm64_cn10k_linux_gcc
diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
deleted file mode 100644
index ecd575947a..0000000000
--- a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
+++ /dev/null
@@ -1,2804 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
-<svg
- xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="631.91431"
- height="288.34286"
- id="svg3868"
- version="1.1"
- inkscape:version="0.92.4 (5da689c313, 2019-01-14)"
- sodipodi:docname="octeontx2_packet_flow_hw_accelerators.svg"
- sodipodi:version="0.32"
- inkscape:output_extension="org.inkscape.output.svg.inkscape">
- <defs
- id="defs3870">
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker18508"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Send">
- <path
- transform="scale(0.2) rotate(180) translate(6,0)"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path18506" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Sstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker18096"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path18094"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) translate(6,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker17550"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Sstart"
- inkscape:collect="always">
- <path
- transform="scale(0.2) translate(6,0)"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path17548" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker17156"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Send">
- <path
- transform="scale(0.2) rotate(180) translate(6,0)"
- style="fill-rule:evenodd;stroke:#00db00;stroke-width:1pt;stroke-opacity:1;fill:#00db00;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path17154" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient13962">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop13958" />
- <stop
- style="stop-color:#fc0000;stop-opacity:0;"
- offset="1"
- id="stop13960" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Send"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow1Send"
- style="overflow:visible;"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6218"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) rotate(180) translate(6,0)" />
- </marker>
- <linearGradient
- id="linearGradient13170"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop13168" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker12747"
- style="overflow:visible;"
- inkscape:isstock="true">
- <path
- id="path12745"
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#ff0000;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- transform="scale(0.6) rotate(180) translate(0,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker10821"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow2Mend"
- inkscape:collect="always">
- <path
- transform="scale(0.6) rotate(180) translate(0,0)"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- id="path10819" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker10463"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow2Mend">
- <path
- transform="scale(0.6) rotate(180) translate(0,0)"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- id="path10461" />
- </marker>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow2Mend"
- style="overflow:visible;"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6230"
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- transform="scale(0.6) rotate(180) translate(0,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker9807"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="TriangleOutS">
- <path
- transform="scale(0.2)"
- style="fill-rule:evenodd;stroke:#fe0000;stroke-width:1pt;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
- id="path9805" />
- </marker>
- <marker
- inkscape:stockid="TriangleOutS"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="TriangleOutS"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6351"
- d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
- style="fill-rule:evenodd;stroke:#fe0000;stroke-width:1pt;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- transform="scale(0.2)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Sstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow1Sstart"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6215"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) translate(6,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient4340">
- <stop
- style="stop-color:#d7eef4;stop-opacity:1;"
- offset="0"
- id="stop4336" />
- <stop
- style="stop-color:#d7eef4;stop-opacity:0;"
- offset="1"
- id="stop4338" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient4330">
- <stop
- style="stop-color:#d7eef4;stop-opacity:1;"
- offset="0"
- id="stop4326" />
- <stop
- style="stop-color:#d7eef4;stop-opacity:0;"
- offset="1"
- id="stop4328" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient3596">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3592" />
- <stop
- style="stop-color:#6ba6fd;stop-opacity:0;"
- offset="1"
- id="stop3594" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker9460"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path9458"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker7396"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path7133"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5474">
- <stop
- style="stop-color:#ffffff;stop-opacity:1;"
- offset="0"
- id="stop5470" />
- <stop
- style="stop-color:#ffffff;stop-opacity:0;"
- offset="1"
- id="stop5472" />
- </linearGradient>
- <linearGradient
- id="linearGradient6545"
- osb:paint="solid">
- <stop
- style="stop-color:#ffa600;stop-opacity:1;"
- offset="0"
- id="stop6543" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3302"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3294"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3290"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3286"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3228"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3188"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3184"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3180"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3176"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3172"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3168"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3164"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3160"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120"
- is_visible="true" />
- <linearGradient
- id="linearGradient3114"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3112" />
- </linearGradient>
- <linearGradient
- id="linearGradient3088"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3086" />
- </linearGradient>
- <linearGradient
- id="linearGradient3058"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3056" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3054"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3050"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3046"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3042"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3038"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3034"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3030"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3004"
- is_visible="true" />
- <linearGradient
- id="linearGradient2975"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2200;stop-opacity:1;"
- offset="0"
- id="stop2973" />
- </linearGradient>
- <linearGradient
- id="linearGradient2969"
- osb:paint="solid">
- <stop
- style="stop-color:#69ff72;stop-opacity:1;"
- offset="0"
- id="stop2967" />
- </linearGradient>
- <linearGradient
- id="linearGradient2963"
- osb:paint="solid">
- <stop
- style="stop-color:#000000;stop-opacity:1;"
- offset="0"
- id="stop2961" />
- </linearGradient>
- <linearGradient
- id="linearGradient2929"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2d00;stop-opacity:1;"
- offset="0"
- id="stop2927" />
- </linearGradient>
- <linearGradient
- id="linearGradient4610"
- osb:paint="solid">
- <stop
- style="stop-color:#00ffff;stop-opacity:1;"
- offset="0"
- id="stop4608" />
- </linearGradient>
- <linearGradient
- id="linearGradient3993"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3991" />
- </linearGradient>
- <linearGradient
- id="linearGradient3808"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3806" />
- </linearGradient>
- <linearGradient
- id="linearGradient3776"
- osb:paint="solid">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop3774" />
- </linearGradient>
- <linearGradient
- id="linearGradient3438"
- osb:paint="solid">
- <stop
- style="stop-color:#b8e132;stop-opacity:1;"
- offset="0"
- id="stop3436" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3408"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3404"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3400"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3392"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3376"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3040"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3036"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3032"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3028"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3024"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3020"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2854"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect2844"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <linearGradient
- id="linearGradient2828"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop2826" />
- </linearGradient>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect329"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart"
- style="overflow:visible">
- <path
- id="path4530"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend"
- style="overflow:visible">
- <path
- id="path4533"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- id="linearGradient4513">
- <stop
- style="stop-color:#fdffdb;stop-opacity:1;"
- offset="0"
- id="stop4515" />
- <stop
- style="stop-color:#dfe2d8;stop-opacity:0;"
- offset="1"
- id="stop4517" />
- </linearGradient>
- <inkscape:perspective
- sodipodi:type="inkscape:persp3d"
- inkscape:vp_x="0 : 526.18109 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_z="744.09448 : 526.18109 : 1"
- inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
- id="perspective3876" />
- <inkscape:perspective
- id="perspective3886"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lend"
- style="overflow:visible">
- <path
- id="path3211"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3892"
- style="overflow:visible">
- <path
- id="path3894"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3896"
- style="overflow:visible">
- <path
- id="path3898"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lstart"
- style="overflow:visible">
- <path
- id="path3208"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3902"
- style="overflow:visible">
- <path
- id="path3904"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3906"
- style="overflow:visible">
- <path
- id="path3908"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3910"
- style="overflow:visible">
- <path
- id="path3912"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective4086"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective4113"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective5195"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-4"
- style="overflow:visible">
- <path
- id="path4533-7"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5272"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-4"
- style="overflow:visible">
- <path
- id="path4530-5"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-0"
- style="overflow:visible">
- <path
- id="path4533-3"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5317"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-3"
- style="overflow:visible">
- <path
- id="path4530-2"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-06"
- style="overflow:visible">
- <path
- id="path4533-1"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-8"
- style="overflow:visible">
- <path
- id="path4530-7"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-9"
- style="overflow:visible">
- <path
- id="path4533-2"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858-0"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3"
- style="overflow:visible">
- <path
- id="path4533-75"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3-2"
- style="overflow:visible">
- <path
- id="path4533-75-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008-3"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7-3"
- is_visible="true" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5695"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,206.76869,3.9208776)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-34"
- style="overflow:visible">
- <path
- id="path4530-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-45"
- style="overflow:visible">
- <path
- id="path4533-16"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7"
- style="overflow:visible">
- <path
- id="path4530-58"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1"
- style="overflow:visible">
- <path
- id="path4533-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-6"
- style="overflow:visible">
- <path
- id="path4530-58-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2"
- style="overflow:visible">
- <path
- id="path4530-58-46"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1"
- style="overflow:visible">
- <path
- id="path4533-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2-6"
- style="overflow:visible">
- <path
- id="path4530-58-46-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-4-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#grad0-40"
- id="linearGradient5917"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(8.8786147,-0.0235964,-0.00460261,1.50035,-400.25558,-2006.3745)"
- x1="-0.12893644"
- y1="1717.1688"
- x2="28.140806"
- y2="1717.1688" />
- <linearGradient
- id="grad0-40"
- x1="0"
- y1="0"
- x2="1"
- y2="0"
- gradientTransform="rotate(60,0.5,0.5)">
- <stop
- offset="0"
- stop-color="#f3f6fa"
- stop-opacity="1"
- id="stop3419" />
- <stop
- offset="0.24"
- stop-color="#f9fafc"
- stop-opacity="1"
- id="stop3421" />
- <stop
- offset="0.54"
- stop-color="#feffff"
- stop-opacity="1"
- id="stop3423" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30"
- style="overflow:visible">
- <path
- id="path4530-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6"
- style="overflow:visible">
- <path
- id="path4533-19"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0"
- style="overflow:visible">
- <path
- id="path4530-0-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8"
- style="overflow:visible">
- <path
- id="path4533-19-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9"
- style="overflow:visible">
- <path
- id="path4530-0-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3"
- style="overflow:visible">
- <path
- id="path4533-19-6-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-7"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,321.82147,-1.8659026)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-81"
- style="overflow:visible">
- <path
- id="path4530-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-5"
- style="overflow:visible">
- <path
- id="path4533-72"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-1"
- style="overflow:visible">
- <path
- id="path4530-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker9714"
- style="overflow:visible">
- <path
- id="path9712"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48"
- style="overflow:visible">
- <path
- id="path4530-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker10117"
- style="overflow:visible">
- <path
- id="path10115"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48-6"
- style="overflow:visible">
- <path
- id="path4530-4-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker11186"
- style="overflow:visible">
- <path
- id="path11184"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9-0"
- style="overflow:visible">
- <path
- id="path4530-0-6-4-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3-7"
- style="overflow:visible">
- <path
- id="path4533-19-6-1-5"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3602"
- x1="113.62777"
- y1="238.35289"
- x2="178.07406"
- y2="238.35289"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3604"
- x1="106.04746"
- y1="231.17514"
- x2="170.49375"
- y2="231.17514"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3606"
- x1="97.456466"
- y1="223.48468"
- x2="161.90276"
- y2="223.48468"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- gradientTransform="matrix(1.2309135,0,0,0.9993652,112.21043,-29.394096)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.2419105,0,0,0.99933655,110.714,51.863352)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.3078944,0,0,0.99916717,224.87462,63.380078)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-8-7"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.2309135,0,0,0.9993652,359.82239,-48.56566)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-9"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(-35.122992,139.17627)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(32.977515,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(100.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(168.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(236.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5-7"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(516.30192,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5-73"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(448.30192,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-59"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(380.30193,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(312.20142,138.83669)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <radialGradient
- inkscape:collect="always"
- xlink:href="#linearGradient4330"
- id="radialGradient4334"
- cx="222.02666"
- cy="354.61401"
- fx="222.02666"
- fy="354.61401"
- r="171.25233"
- gradientTransform="matrix(1,0,0,0.15767701,0,298.69953)"
- gradientUnits="userSpaceOnUse" />
- <radialGradient
- inkscape:collect="always"
- xlink:href="#linearGradient4340"
- id="radialGradient4342"
- cx="535.05641"
- cy="353.56737"
- fx="535.05641"
- fy="353.56737"
- r="136.95767"
- gradientTransform="matrix(1.0000096,0,0,0.19866251,-0.00515595,284.82679)"
- gradientUnits="userSpaceOnUse" />
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker28236"
- refX="0"
- refY="0"
- orient="auto"
- inkscape:stockid="Arrow2Mstart">
- <path
- inkscape:connector-curvature="0"
- transform="scale(0.6)"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- id="path28234" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3706"
- style="overflow:visible">
- <path
- id="path3704"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect14461"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9"
- style="fill:#fe0000;fill-opacity:1;fill-rule:evenodd;stroke:#fe0000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3-1"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9-8"
- style="fill:#fe0000;fill-opacity:1;fill-rule:evenodd;stroke:#fe0000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient13962"
- id="linearGradient14808"
- x1="447.95767"
- y1="176.3018"
- x2="576.27008"
- y2="176.3018"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(0,-8)" />
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3-1-6"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9-8-5"
- style="fill:#808080;fill-opacity:1;fill-rule:evenodd;stroke:#808080;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-53"
- style="overflow:visible">
- <path
- id="path4533-35"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-99"
- style="overflow:visible">
- <path
- id="path4533-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- </defs>
- <sodipodi:namedview
- id="base"
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1.0"
- inkscape:pageopacity="0.0"
- inkscape:pageshadow="2"
- inkscape:zoom="1.8101934"
- inkscape:cx="434.42776"
- inkscape:cy="99.90063"
- inkscape:document-units="px"
- inkscape:current-layer="layer1"
- showgrid="false"
- inkscape:window-width="1920"
- inkscape:window-height="1057"
- inkscape:window-x="-8"
- inkscape:window-y="-8"
- inkscape:window-maximized="1"
- fit-margin-top="0.1"
- fit-margin-left="0.1"
- fit-margin-right="0.1"
- fit-margin-bottom="0.1"
- inkscape:measure-start="-29.078,219.858"
- inkscape:measure-end="346.809,219.858"
- showguides="true"
- inkscape:snap-page="true"
- inkscape:snap-others="false"
- inkscape:snap-nodes="false"
- inkscape:snap-bbox="true"
- inkscape:lockguides="false"
- inkscape:guide-bbox="true">
- <sodipodi:guide
- position="-120.20815,574.17069"
- orientation="0,1"
- id="guide7077"
- inkscape:locked="false" />
- </sodipodi:namedview>
- <metadata
- id="metadata3873">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <g
- inkscape:label="Layer 1"
- inkscape:groupmode="layer"
- id="layer1"
- transform="translate(-46.542857,-100.33361)">
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-7"
- width="64.18129"
- height="45.550591"
- x="575.72662"
- y="144.79553" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-8-5"
- width="64.18129"
- height="45.550591"
- x="584.44391"
- y="152.87041" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-42-0"
- width="64.18129"
- height="45.550591"
- x="593.03491"
- y="160.56087" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-0-3"
- width="64.18129"
- height="45.550591"
- x="600.61523"
- y="167.73862" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-46-4"
- width="64.18129"
- height="45.550591"
- x="608.70087"
- y="175.42906" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#aaffcc;fill-opacity:1;stroke:none"
- transform="matrix(0.71467688,0,0,0.72506311,529.61388,101.41825)"><flowRegion
- id="flowRegion1855-0"
- style="fill:#aaffcc"><rect
- id="rect1857-5"
- width="67.17514"
- height="33.941124"
- x="120.20815"
- y="120.75856"
- style="fill:#aaffcc" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#aaffcc"
- id="flowPara1976" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5313"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;letter-spacing:0px;word-spacing:0px"><flowRegion
- id="flowRegion5315"><rect
- id="rect5317"
- width="120.91525"
- height="96.873627"
- x="-192.33304"
- y="-87.130829" /></flowRegion><flowPara
- id="flowPara5319" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot8331"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion8333"><rect
- id="rect8335"
- width="48.5"
- height="28"
- x="252.5"
- y="208.34286" /></flowRegion><flowPara
- id="flowPara8337" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3606);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-8"
- width="64.18129"
- height="45.550591"
- x="101.58897"
- y="178.70938" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3604);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-42"
- width="64.18129"
- height="45.550591"
- x="110.17996"
- y="186.39984" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3602);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-0"
- width="64.18129"
- height="45.550591"
- x="117.76027"
- y="193.57759" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-46"
- width="64.18129"
- height="45.550591"
- x="125.84592"
- y="201.26804" />
- <rect
- style="fill:#d7f4e3;fill-opacity:1;stroke:url(#linearGradient3608-4);stroke-width:0.293915;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86"
- width="79.001617"
- height="45.521675"
- x="221.60374"
- y="163.11812" />
- <rect
- style="fill:#d7f4e3;fill-opacity:1;stroke:url(#linearGradient3608-4-8);stroke-width:0.29522076;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-5"
- width="79.70742"
- height="45.52037"
- x="221.08463"
- y="244.37004" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718"
- width="125.8186"
- height="100.36277"
- x="321.87323"
- y="112.72702" />
- <rect
- style="fill:#ffd5d5;fill-opacity:1;stroke:url(#linearGradient3608-4-8-7);stroke-width:0.30293623;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-5-3"
- width="83.942352"
- height="45.512653"
- x="341.10928"
- y="255.85414" />
- <rect
- style="fill:#ffb380;fill-opacity:1;stroke:url(#linearGradient3608-4-9);stroke-width:0.293915;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-2"
- width="79.001617"
- height="45.521675"
- x="469.21576"
- y="143.94656" />
- <rect
- style="opacity:1;fill:url(#radialGradient4334);fill-opacity:1;stroke:#6ba6fd;stroke-width:0.32037571;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3783"
- width="342.1843"
- height="53.684738"
- x="50.934502"
- y="327.77164" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1"
- width="64.18129"
- height="45.550591"
- x="53.748672"
- y="331.81079" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3"
- width="64.18129"
- height="45.550591"
- x="121.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6"
- width="64.18129"
- height="45.550591"
- x="189.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4"
- width="64.18129"
- height="45.550591"
- x="257.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-7);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-9"
- width="64.18129"
- height="45.550591"
- x="325.84918"
- y="331.71741" />
- <rect
- style="opacity:1;fill:url(#radialGradient4342);fill-opacity:1;stroke:#6ba6fd;stroke-width:0.28768006;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3783-8"
- width="273.62766"
- height="54.131645"
- x="398.24258"
- y="328.00156" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-5"
- width="64.18129"
- height="45.550591"
- x="401.07309"
- y="331.47122" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-0"
- width="64.18129"
- height="45.550591"
- x="469.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-59);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-3"
- width="64.18129"
- height="45.550591"
- x="537.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-73);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-6"
- width="64.18129"
- height="45.550591"
- x="605.17358"
- y="331.37781" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3"
- width="27.798103"
- height="21.434149"
- x="325.80197"
- y="117.21037" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="140.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="164.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5"
- width="27.798103"
- height="21.434149"
- x="356.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="140.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="164.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5"
- width="27.798103"
- height="21.434149"
- x="386.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="164.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9"
- width="27.798103"
- height="21.434149"
- x="416.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="164.38896" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5"
- width="27.798103"
- height="21.434149"
- x="324.61139"
- y="187.85849" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0"
- width="27.798103"
- height="21.434149"
- x="355.17996"
- y="188.03886" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0"
- width="27.798103"
- height="21.434149"
- x="385.17996"
- y="188.03888" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4"
- width="27.798103"
- height="21.434149"
- x="415.17996"
- y="188.03889" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-5"
- width="125.8186"
- height="100.36277"
- x="452.24075"
- y="208.56764" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-9"
- width="27.798103"
- height="21.434149"
- x="456.16949"
- y="213.05098" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-8"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="236.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-55"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="260.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-7"
- width="27.798103"
- height="21.434149"
- x="486.73807"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-5"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="236.22954" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-3"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-2"
- width="27.798103"
- height="21.434149"
- x="516.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-5"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-1"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9-6"
- width="27.798103"
- height="21.434149"
- x="546.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3-1"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-7"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5-1"
- width="27.798103"
- height="21.434149"
- x="454.97891"
- y="283.6991" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0-6"
- width="27.798103"
- height="21.434149"
- x="485.54749"
- y="283.87946" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0-7"
- width="27.798103"
- height="21.434149"
- x="515.54749"
- y="283.87949" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4-2"
- width="27.798103"
- height="21.434149"
- x="545.54749"
- y="283.87952" />
- <g
- id="g5089"
- transform="matrix(0.7206312,0,0,1.0073979,12.37404,-312.02679)"
- style="fill:#ff8080">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#ff8080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455" />
- <path
- inkscape:connector-curvature="0"
- id="path5083"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#ff8080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
- </g>
- <g
- id="g5089-4"
- transform="matrix(-0.6745281,0,0,0.97266112,143.12774,-266.3349)"
- style="fill:#000080;fill-opacity:1">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#000080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455-9" />
- <path
- inkscape:connector-curvature="0"
- id="path5083-2"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#000080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;fill-opacity:1" />
- </g>
- <flowRoot
- xml:space="preserve"
- id="flowRoot5112"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(52.199711,162.55901)"><flowRegion
- id="flowRegion5114"><rect
- id="rect5116"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118">Tx</flowPara></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5112-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(49.878465,112.26812)"><flowRegion
- id="flowRegion5114-7"><rect
- id="rect5116-7"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118-5">Rx</flowPara></flowRoot> <path
- style="fill:none;stroke:#f60300;stroke-width:0.783;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:0.783, 0.78300000000000003;stroke-dashoffset:0;marker-start:url(#Arrow1Sstart);marker-end:url(#TriangleOutS)"
- d="m 116.81066,179.28348 v -11.31903 l -0.37893,-12.93605 0.37893,-5.25526 3.03134,-5.25526 4.16811,-2.82976 8.3362,-1.61701 h 7.19945 l 7.19946,2.02126 3.03135,2.02126 0.37892,2.02125 -0.37892,3.23401 -0.37892,7.27652 -0.37892,8.48927 -0.37892,14.55304"
- id="path8433"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="104.04285"
- y="144.86398"
- id="text9071"><tspan
- sodipodi:role="line"
- id="tspan9069"
- x="104.04285"
- y="144.86398"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">HW loop back device</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="59.542858"
- y="53.676483"
- id="text9621"><tspan
- sodipodi:role="line"
- id="tspan9619"
- x="59.542858"
- y="65.840889" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2-4-3-9-0-2-9-5-6-7-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,454.1297,247.6848)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9-2-5-4-1-1-1-4-0-5-4"><rect
- id="rect1857-5-1-5-2-6-1-4-9-3-8-1-8-5-7-9-1"
- width="162.09244"
- height="78.764809"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9723" /></flowRoot> <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow2Mend)"
- d="m 181.60025,194.22211 12.72792,-7.07106 14.14214,-2.82843 12.02081,0.70711 h 1.41422 v 0"
- id="path9797"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker10821)"
- d="m 179.47893,193.51501 3.53554,-14.14214 5.65685,-12.72792 16.97056,-9.19239 8.48528,-9.19238 14.84924,-7.77818 24.04163,-8.48528 18.38478,-6.36396 38.89087,-2.82843 h 12.02082 l -2.12132,-0.7071"
- id="path10453"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.70021206;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.70021208, 0.70021208;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3)"
- d="m 299.68795,188.0612 7.97521,-5.53298 8.86135,-2.2132 7.53214,0.5533 h 0.88614 v 0"
- id="path9797-9"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1)"
- d="m 300.49277,174.25976 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker12747)"
- d="m 299.68708,196.34344 9.19239,7.77817 7.07107,1.41421 h 4.94974 v 0"
- id="path12737"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:url(#linearGradient14808);stroke-width:4.66056013;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:4.66056002, 4.66056002;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Send)"
- d="m 447.95767,168.30181 c 119.99171,0 119.99171,0 119.99171,0"
- id="path13236"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#808080;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673000000001;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1-6)"
- d="m 529.56098,142.71226 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 612.93538,222.50639 -5.65686,12.72792 -14.84924,3.53553 -14.14213,0.70711"
- id="path16128"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 624.95619,220.38507 -3.53553,13.43502 -12.72792,14.84925 -9.19239,5.65685 -19.09188,2.82843 -1.41422,-0.70711 h -1.41421"
- id="path16130"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 635.56279,221.09217 -7.77817,33.94113 -4.24264,6.36396 -8.48528,3.53553 -10.6066,4.94975 -19.09189,5.65685 -6.36396,3.53554"
- id="path16132"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1.01083219;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.01083222, 1.01083221999999995;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-53)"
- d="m 456.03282,270.85761 -4.96024,14.83162 -13.02062,4.11988 -12.40058,0.82399"
- id="path16128-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.80101544;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.80101541, 0.80101540999999998;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-99)"
- d="m 341.29831,266.70565 -6.88826,6.70663 -18.08168,1.86296 -17.22065,0.37258"
- id="path16128-6"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00faf5;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 219.78402,264.93279 -6.36396,-9.89949 -3.53554,-16.26346 -7.77817,-8.48528 -8.48528,-4.94975 -4.94975,-2.82842"
- id="path17144"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00db00;stroke-width:1.4;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.4, 1.39999999999999991;stroke-dashoffset:0;marker-end:url(#marker17156);marker-start:url(#marker17550)"
- d="m 651.11914,221.09217 -7.07107,31.81981 -17.67766,34.64823 -21.21321,26.87005 -80.61017,1.41422 -86.97413,1.41421 -79.90306,-3.53553 -52.3259,1.41421 -24.04163,10.6066 -2.82843,1.41422"
- id="path17146"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#000000;stroke-width:1.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.3, 1.30000000000000004;stroke-dashoffset:0;marker-start:url(#marker18096);marker-end:url(#marker18508)"
- d="M 659.60442,221.09217 C 656.776,327.86529 656.776,328.5724 656.776,328.5724"
- id="path18086"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,137.7802,161.1139)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9"><rect
- id="rect1857-5-1-5-2-6-1"
- width="174.19844"
- height="91.867104"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9188-8-4" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="155.96185"
- y="220.07472"
- id="text9071-6"><tspan
- sodipodi:role="line"
- x="158.29518"
- y="220.07472"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2100"> <tspan
- style="fill:#0000ff"
- id="tspan2327">Ethdev Ports </tspan></tspan><tspan
- sodipodi:role="line"
- x="155.96185"
- y="236.74139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104">(NIX)</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2106"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2108"><rect
- id="rect2110"
- width="42.1875"
- height="28.125"
- x="178.125"
- y="71.155365" /></flowRegion><flowPara
- id="flowPara2112" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2114"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2116"><rect
- id="rect2118"
- width="38.28125"
- height="28.90625"
- x="196.09375"
- y="74.280365" /></flowRegion><flowPara
- id="flowPara2120" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2122"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2124"><rect
- id="rect2126"
- width="39.0625"
- height="23.4375"
- x="186.71875"
- y="153.96786" /></flowRegion><flowPara
- id="flowPara2128" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="262.1366"
- y="172.08614"
- id="text9071-6-4"><tspan
- sodipodi:role="line"
- x="264.46994"
- y="172.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0">Ingress </tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="188.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176">Classification</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="205.41946"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="222.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178" /><tspan
- sodipodi:role="line"
- x="262.1366"
- y="238.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="261.26727"
- y="254.46307"
- id="text9071-6-4-9"><tspan
- sodipodi:role="line"
- x="263.60062"
- y="254.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-0">Egress </tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="271.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-8">Classification</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="287.79642"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-9">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="304.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3" /><tspan
- sodipodi:role="line"
- x="261.26727"
- y="321.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="362.7016"
- y="111.81297"
- id="text9071-4"><tspan
- sodipodi:role="line"
- id="tspan9069-8"
- x="362.7016"
- y="111.81297"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Rx Queues</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="488.21777"
- y="207.21898"
- id="text9071-4-3"><tspan
- sodipodi:role="line"
- id="tspan9069-8-8"
- x="488.21777"
- y="207.21898"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Tx Queues</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2311"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2313"><rect
- id="rect2315"
- width="49.21875"
- height="41.40625"
- x="195.3125"
- y="68.811615" /></flowRegion><flowPara
- id="flowPara2317" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2319"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2321"><rect
- id="rect2323"
- width="40.625"
- height="39.0625"
- x="196.09375"
- y="69.592865" /></flowRegion><flowPara
- id="flowPara2325" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="382.20477"
- y="263.74432"
- id="text9071-6-4-6"><tspan
- sodipodi:role="line"
- x="382.20477"
- y="263.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-9">Egress</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="280.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-3">Traffic Manager</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="297.07767"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-1">(NIX)</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="313.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-6" /><tspan
- sodipodi:role="line"
- x="382.20477"
- y="330.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="500.98602"
- y="154.02556"
- id="text9071-6-4-0"><tspan
- sodipodi:role="line"
- x="503.31937"
- y="154.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-97">Scheduler </tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="170.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2389" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="187.35889"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2391">SSO</tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="204.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-60" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="220.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-3" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="571.61627"
- y="119.24016"
- id="text9071-4-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-82"
- x="571.61627"
- y="119.24016"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Supports both poll mode and/or event mode</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="135.90683"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2416">by configuring scheduler</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="152.57349"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2418" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="638.14227"
- y="192.46773"
- id="text9071-6-4-9-2"><tspan
- sodipodi:role="line"
- x="638.14227"
- y="192.46773"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3-2">ARMv8</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="209.1344"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2499">Cores</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="225.80106"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="180.24902"
- y="325.09399"
- id="text9071-4-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7"
- x="180.24902"
- y="325.09399"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Hardware Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="487.8916"
- y="325.91599"
- id="text9071-4-1-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7-1"
- x="487.8916"
- y="325.91599"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Software Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="81.178604"
- y="350.03149"
- id="text9071-4-18"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83"
- x="81.178604"
- y="350.03149"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Mempool</tspan><tspan
- sodipodi:role="line"
- x="81.178604"
- y="366.69815"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555">(NPA)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="151.09518"
- y="348.77365"
- id="text9071-4-18-9"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-3"
- x="151.09518"
- y="348.77365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Timer</tspan><tspan
- sodipodi:role="line"
- x="151.09518"
- y="365.44031"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-9">(TIM)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="222.56393"
- y="347.1174"
- id="text9071-4-18-0"><tspan
- sodipodi:role="line"
- x="222.56393"
- y="347.1174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90">Crypto</tspan><tspan
- sodipodi:role="line"
- x="222.56393"
- y="363.78406"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601">(CPT)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="289.00229"
- y="347.69473"
- id="text9071-4-18-0-5"><tspan
- sodipodi:role="line"
- x="289.00229"
- y="347.69473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9">Compress</tspan><tspan
- sodipodi:role="line"
- x="289.00229"
- y="364.36139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6">(ZIP)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="355.50653"
- y="348.60098"
- id="text9071-4-18-0-5-6"><tspan
- sodipodi:role="line"
- x="355.50653"
- y="348.60098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9-5">Shared</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="365.26764"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2645">Memory</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="381.93433"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6-1" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="430.31393"
- y="356.4924"
- id="text9071-4-18-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-35"
- x="430.31393"
- y="356.4924"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">SW Ring</tspan><tspan
- sodipodi:role="line"
- x="430.31393"
- y="373.15906"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-6" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="569.37646"
- y="341.1799"
- id="text9071-4-18-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-4"
- x="569.37646"
- y="341.1799"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">HASH</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="357.84656"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2742">LPM</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="374.51324"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-2">ACL</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="503.75143"
- y="355.02365"
- id="text9071-4-18-2-3"><tspan
- sodipodi:role="line"
- x="503.75143"
- y="355.02365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2733">Mbuf</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="639.34521"
- y="355.6174"
- id="text9071-4-18-19"><tspan
- sodipodi:role="line"
- x="639.34521"
- y="355.6174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2771">De(Frag)</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg b/doc/guides/platform/img/octeontx2_resource_virtualization.svg
deleted file mode 100644
index bf976b52af..0000000000
--- a/doc/guides/platform/img/octeontx2_resource_virtualization.svg
+++ /dev/null
@@ -1,2418 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
-<svg
- xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="631.91431"
- height="288.34286"
- id="svg3868"
- version="1.1"
- inkscape:version="0.92.4 (5da689c313, 2019-01-14)"
- sodipodi:docname="octeontx2_resource_virtualization.svg"
- sodipodi:version="0.32"
- inkscape:output_extension="org.inkscape.output.svg.inkscape">
- <defs
- id="defs3870">
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker9460"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path9458"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker7396"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path7133"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5474">
- <stop
- style="stop-color:#ffffff;stop-opacity:1;"
- offset="0"
- id="stop5470" />
- <stop
- style="stop-color:#ffffff;stop-opacity:0;"
- offset="1"
- id="stop5472" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5464">
- <stop
- style="stop-color:#daeef5;stop-opacity:1;"
- offset="0"
- id="stop5460" />
- <stop
- style="stop-color:#daeef5;stop-opacity:0;"
- offset="1"
- id="stop5462" />
- </linearGradient>
- <linearGradient
- id="linearGradient6545"
- osb:paint="solid">
- <stop
- style="stop-color:#ffa600;stop-opacity:1;"
- offset="0"
- id="stop6543" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3302"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3294"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3290"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3286"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3228"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3188"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3184"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3180"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3176"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3172"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3168"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3164"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3160"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120"
- is_visible="true" />
- <linearGradient
- id="linearGradient3114"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3112" />
- </linearGradient>
- <linearGradient
- id="linearGradient3088"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3086" />
- </linearGradient>
- <linearGradient
- id="linearGradient3058"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3056" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3054"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3050"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3046"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3042"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3038"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3034"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3030"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3004"
- is_visible="true" />
- <linearGradient
- id="linearGradient2975"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2200;stop-opacity:1;"
- offset="0"
- id="stop2973" />
- </linearGradient>
- <linearGradient
- id="linearGradient2969"
- osb:paint="solid">
- <stop
- style="stop-color:#69ff72;stop-opacity:1;"
- offset="0"
- id="stop2967" />
- </linearGradient>
- <linearGradient
- id="linearGradient2963"
- osb:paint="solid">
- <stop
- style="stop-color:#000000;stop-opacity:1;"
- offset="0"
- id="stop2961" />
- </linearGradient>
- <linearGradient
- id="linearGradient2929"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2d00;stop-opacity:1;"
- offset="0"
- id="stop2927" />
- </linearGradient>
- <linearGradient
- id="linearGradient4610"
- osb:paint="solid">
- <stop
- style="stop-color:#00ffff;stop-opacity:1;"
- offset="0"
- id="stop4608" />
- </linearGradient>
- <linearGradient
- id="linearGradient3993"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3991" />
- </linearGradient>
- <linearGradient
- id="linearGradient3808"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3806" />
- </linearGradient>
- <linearGradient
- id="linearGradient3776"
- osb:paint="solid">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop3774" />
- </linearGradient>
- <linearGradient
- id="linearGradient3438"
- osb:paint="solid">
- <stop
- style="stop-color:#b8e132;stop-opacity:1;"
- offset="0"
- id="stop3436" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3408"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3404"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3400"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3392"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3376"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3040"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3036"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3032"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3028"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3024"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3020"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2854"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect2844"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <linearGradient
- id="linearGradient2828"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop2826" />
- </linearGradient>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect329"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart"
- style="overflow:visible">
- <path
- id="path4530"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend"
- style="overflow:visible">
- <path
- id="path4533"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- id="linearGradient4513">
- <stop
- style="stop-color:#fdffdb;stop-opacity:1;"
- offset="0"
- id="stop4515" />
- <stop
- style="stop-color:#dfe2d8;stop-opacity:0;"
- offset="1"
- id="stop4517" />
- </linearGradient>
- <inkscape:perspective
- sodipodi:type="inkscape:persp3d"
- inkscape:vp_x="0 : 526.18109 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_z="744.09448 : 526.18109 : 1"
- inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
- id="perspective3876" />
- <inkscape:perspective
- id="perspective3886"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lend"
- style="overflow:visible">
- <path
- id="path3211"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3892"
- style="overflow:visible">
- <path
- id="path3894"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3896"
- style="overflow:visible">
- <path
- id="path3898"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lstart"
- style="overflow:visible">
- <path
- id="path3208"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3902"
- style="overflow:visible">
- <path
- id="path3904"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3906"
- style="overflow:visible">
- <path
- id="path3908"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3910"
- style="overflow:visible">
- <path
- id="path3912"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective4086"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective4113"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective5195"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-4"
- style="overflow:visible">
- <path
- id="path4533-7"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5272"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-4"
- style="overflow:visible">
- <path
- id="path4530-5"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-0"
- style="overflow:visible">
- <path
- id="path4533-3"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5317"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-3"
- style="overflow:visible">
- <path
- id="path4530-2"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-06"
- style="overflow:visible">
- <path
- id="path4533-1"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-8"
- style="overflow:visible">
- <path
- id="path4530-7"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-9"
- style="overflow:visible">
- <path
- id="path4533-2"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858-0"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3"
- style="overflow:visible">
- <path
- id="path4533-75"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3-2"
- style="overflow:visible">
- <path
- id="path4533-75-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008-3"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7-3"
- is_visible="true" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5464"
- id="linearGradient5466"
- x1="65.724048"
- y1="169.38839"
- x2="183.38978"
- y2="169.38839"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-14,-4)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5476"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,105.65926,-0.6580533)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5658"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,148.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5695"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,206.76869,3.9208776)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-34"
- style="overflow:visible">
- <path
- id="path4530-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-45"
- style="overflow:visible">
- <path
- id="path4533-16"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7"
- style="overflow:visible">
- <path
- id="path4530-58"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1"
- style="overflow:visible">
- <path
- id="path4533-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-6"
- style="overflow:visible">
- <path
- id="path4530-58-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2"
- style="overflow:visible">
- <path
- id="path4530-58-46"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1"
- style="overflow:visible">
- <path
- id="path4533-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2-6"
- style="overflow:visible">
- <path
- id="path4530-58-46-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-4-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,192.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#grad0-40"
- id="linearGradient5917"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(8.8786147,-0.0235964,-0.00460261,1.50035,-400.25558,-2006.3745)"
- x1="-0.12893644"
- y1="1717.1688"
- x2="28.140806"
- y2="1717.1688" />
- <linearGradient
- id="grad0-40"
- x1="0"
- y1="0"
- x2="1"
- y2="0"
- gradientTransform="rotate(60,0.5,0.5)">
- <stop
- offset="0"
- stop-color="#f3f6fa"
- stop-opacity="1"
- id="stop3419" />
- <stop
- offset="0.24"
- stop-color="#f9fafc"
- stop-opacity="1"
- id="stop3421" />
- <stop
- offset="0.54"
- stop-color="#feffff"
- stop-opacity="1"
- id="stop3423" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30"
- style="overflow:visible">
- <path
- id="path4530-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6"
- style="overflow:visible">
- <path
- id="path4533-19"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0"
- style="overflow:visible">
- <path
- id="path4530-0-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8"
- style="overflow:visible">
- <path
- id="path4533-19-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9"
- style="overflow:visible">
- <path
- id="path4530-0-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3"
- style="overflow:visible">
- <path
- id="path4533-19-6-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-7"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,321.82147,-1.8659026)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,376.02779,12.240541)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-81"
- style="overflow:visible">
- <path
- id="path4530-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-5"
- style="overflow:visible">
- <path
- id="path4533-72"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-1"
- style="overflow:visible">
- <path
- id="path4530-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker9714"
- style="overflow:visible">
- <path
- id="path9712"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48"
- style="overflow:visible">
- <path
- id="path4530-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker10117"
- style="overflow:visible">
- <path
- id="path10115"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48-6"
- style="overflow:visible">
- <path
- id="path4530-4-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker11186"
- style="overflow:visible">
- <path
- id="path11184"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8-0"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,497.77779,12.751681)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9-0"
- style="overflow:visible">
- <path
- id="path4530-0-6-4-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3-7"
- style="overflow:visible">
- <path
- id="path4533-19-6-1-5"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- </defs>
- <sodipodi:namedview
- id="base"
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1.0"
- inkscape:pageopacity="0.0"
- inkscape:pageshadow="2"
- inkscape:zoom="1.4142136"
- inkscape:cx="371.09569"
- inkscape:cy="130.22425"
- inkscape:document-units="px"
- inkscape:current-layer="layer1"
- showgrid="false"
- inkscape:window-width="1920"
- inkscape:window-height="1057"
- inkscape:window-x="-8"
- inkscape:window-y="-8"
- inkscape:window-maximized="1"
- fit-margin-top="0.1"
- fit-margin-left="0.1"
- fit-margin-right="0.1"
- fit-margin-bottom="0.1"
- inkscape:measure-start="-29.078,219.858"
- inkscape:measure-end="346.809,219.858"
- showguides="true"
- inkscape:snap-page="true"
- inkscape:snap-others="false"
- inkscape:snap-nodes="false"
- inkscape:snap-bbox="true"
- inkscape:lockguides="false"
- inkscape:guide-bbox="true">
- <sodipodi:guide
- position="-120.20815,574.17069"
- orientation="0,1"
- id="guide7077"
- inkscape:locked="false" />
- </sodipodi:namedview>
- <metadata
- id="metadata3873">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <g
- inkscape:label="Layer 1"
- inkscape:groupmode="layer"
- id="layer1"
- transform="translate(-46.542857,-100.33361)">
- <flowRoot
- xml:space="preserve"
- id="flowRoot5313"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;letter-spacing:0px;word-spacing:0px"><flowRegion
- id="flowRegion5315"><rect
- id="rect5317"
- width="120.91525"
- height="96.873627"
- x="-192.33304"
- y="-87.130829" /></flowRegion><flowPara
- id="flowPara5319" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="90.320152"
- y="299.67871"
- id="text2978"
- inkscape:export-filename="/home/matz/barracuda/rapports/mbuf-api-v2-images/octeon_multi.png"
- inkscape:export-xdpi="112"
- inkscape:export-ydpi="112"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="90.320152"
- y="299.67871"
- id="tspan3006"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15.74255753px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025"> </tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.82973665;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066"
- width="127.44949"
- height="225.03024"
- x="47.185646"
- y="111.20448" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="154.93478" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55900002;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6"
- width="117.1069"
- height="20.907221"
- x="51.955002"
- y="181.51834" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b7dfd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6-2"
- width="117.1069"
- height="20.907221"
- x="51.691605"
- y="205.82234" />
- <rect
- y="154.93478"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5160"
- style="fill:url(#linearGradient5466);fill-opacity:1;stroke:#6b8afd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5162"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="231.92767" />
- <rect
- y="255.45328"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5164"
- style="fill:#daeef5;fill-opacity:1;stroke:#6b6ffd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="281.11758" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.59729731;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-6"
- width="117.0697"
- height="23.892008"
- x="52.659744"
- y="306.01089" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:'Bitstream Vera Sans';-inkscape-font-specification:'Bitstream Vera Sans';fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.955597"
- y="163.55217"
- id="text5219-26-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.955597"
- y="163.55217"
- id="tspan5223-10-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.098343"
- y="187.18845"
- id="text5219-26-1-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.098343"
- y="187.18845"
- id="tspan5223-10-9-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.829468"
- y="211.79611"
- id="text5219-26-1-5"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.829468"
- y="211.79611"
- id="tspan5223-10-9-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.770523"
- y="235.66898"
- id="text5219-26-1-5-7-6"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.770523"
- y="235.66898"
- id="tspan5223-10-9-1-6-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPC AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.895973"
- y="259.25156"
- id="text5219-26-1-5-7-6-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.895973"
- y="259.25156"
- id="tspan5223-10-9-1-6-8-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">CPT AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.645073"
- y="282.35391"
- id="text5219-26-1-5-7-6-3-0"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.645073"
- y="282.35391"
- id="tspan5223-10-9-1-6-8-3-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">RVU AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.93084431px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.07757032"
- x="110.2803"
- y="126.02858"
- id="text5219-26"
- transform="scale(1.0076913,0.9923674)"><tspan
- sodipodi:role="line"
- x="110.2803"
- y="126.02858"
- id="tspan5223-10"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032">Linux AF driver</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="139.49821"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5325">(octeontx2_af)</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="152.96783"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#ff0000;stroke-width:1.07757032"
- id="tspan5327">PF0</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="160.38988"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5329" /></text>
- <rect
- style="fill:url(#linearGradient5476);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468"
- width="36.554455"
- height="18.169683"
- x="49.603416"
- y="357.7995" />
- <g
- id="g5594"
- transform="translate(-18,-40)">
- <text
- id="text5480"
- y="409.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#6a5400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#6a5400;fill-opacity:1"
- y="409.46326"
- x="73.41291"
- id="tspan5478"
- sodipodi:role="line">CGX-0</tspan></text>
- </g>
- <rect
- style="fill:url(#linearGradient5658);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468-2"
- width="36.554455"
- height="18.169683"
- x="92.712852"
- y="358.37842" />
- <g
- id="g5594-7"
- transform="translate(25.109434,2.578931)">
- <text
- id="text5480-9"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#695400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0"
- sodipodi:role="line">CGX-1</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="104.15788"
- y="355.79947"
- id="text5711"><tspan
- sodipodi:role="line"
- id="tspan5709"
- x="104.15788"
- y="392.29269" /></text>
- </g>
- <rect
- style="opacity:1;fill:url(#linearGradient6997);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1"
- width="36.554455"
- height="18.169683"
- x="136.71284"
- y="358.37842" />
- <g
- id="g5594-7-0"
- transform="translate(69.109434,2.578931)">
- <text
- id="text5480-9-7"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0-4"
- sodipodi:role="line">CGX-2</tspan></text>
- </g>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="116.4436"
- y="309.90784"
- id="text5219-26-1-5-7-6-3-0-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="116.4436"
- y="309.90784"
- id="tspan5223-10-9-1-6-8-3-1-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.03398025">CGX-FW Interface</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="m 65.54286,336.17648 v 23"
- id="path7614"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30);marker-end:url(#Arrow1Mend-6)"
- d="m 108.54285,336.67647 v 23"
- id="path7614-2"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0);marker-end:url(#Arrow1Mend-6-8)"
- d="m 152.54285,336.67647 v 23"
- id="path7614-2-2"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50469553;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1"
- width="100.27454"
- height="105.81976"
- x="242.65558"
- y="233.7666" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6"
- width="100.27335"
- height="106.31857"
- x="361.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7"
- width="100.27335"
- height="106.31857"
- x="467.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.49445513;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7-0"
- width="95.784782"
- height="106.33"
- x="573.40039"
- y="233.76149" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:0.984;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.984, 0.98400000000000021;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="M 176.02438,304.15296 C 237.06133,305.2 237.06133,305.2 237.06133,305.2"
- id="path8315"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="177.04286"
- y="299.17648"
- id="text8319"><tspan
- sodipodi:role="line"
- id="tspan8317"
- x="177.04286"
- y="299.17648"
- style="font-size:10.66666698px;line-height:1">AF-PF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="291.53308"
- y="264.67648"
- id="text8323"><tspan
- sodipodi:role="line"
- id="tspan8321"
- x="291.53308"
- y="264.67648"
- style="font-size:10px;text-align:center;text-anchor:middle"><tspan
- style="font-size:10px;fill:#0000ff"
- id="tspan8339"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11972">Linux</tspan></tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11970"> Netdev </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="281.34314"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345">driver</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="298.00983"
- id="tspan8325"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">(octeontx2_pf)</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="314.67648"
- id="tspan8327"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10511">x</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="331.34314"
- id="tspan8329" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot8331"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion8333"><rect
- id="rect8335"
- width="48.5"
- height="28"
- x="252.5"
- y="208.34286" /></flowRegion><flowPara
- id="flowPara8337" /></flowRoot> <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9"
- width="71.28923"
- height="15.589548"
- x="253.89825"
- y="320.63168" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="283.97266"
- y="319.09348"
- id="text5219-26-1-5-7-6-3-0-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="283.97266"
- y="319.09348"
- id="tspan5223-10-9-1-6-8-3-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7"
- width="71.28923"
- height="15.589548"
- x="255.89822"
- y="237.88171" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="285.03787"
- y="239.81017"
- id="text5219-26-1-5-7-6-3-0-1-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="285.03787"
- y="239.81017"
- id="tspan5223-10-9-1-6-8-3-1-0-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9);marker-end:url(#Arrow1Mend-6-8-3)"
- d="m 287.54285,340.99417 v 18.3646"
- id="path7614-2-2-8"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8);fill-opacity:1;stroke:#695400;stroke-width:1.316;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4"
- width="81.505402"
- height="17.62063"
- x="251.04015"
- y="359.86615" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="263.46152"
- y="224.99915"
- id="text8319-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7"
- x="263.46152"
- y="224.99915"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="259.23218"
- y="371.46179"
- id="text8319-7-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3"
- x="259.23218"
- y="371.46179"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3"
- width="80.855743"
- height="92.400963"
- x="197.86496"
- y="112.97599" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4"
- width="80.855743"
- height="92.400963"
- x="286.61499"
- y="112.476" />
- <path
- style="fill:none;stroke:#580000;stroke-width:0.60000002;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.3, 0.3;stroke-dashoffset:0;stroke-opacity:1"
- d="m 188.04286,109.67648 c 2.5,238.5 2,238 2,238 163.49999,0.5 163.49999,0.5 163.49999,0.5 v -124 l -70,0.5 -1.5,-116 v 1.5 z"
- id="path9240"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0"
- width="80.855743"
- height="92.400963"
- x="375.11499"
- y="111.976" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0-0"
- width="80.855743"
- height="92.400963"
- x="586.61499"
- y="111.476" />
- <path
- style="fill:none;stroke:#ff00cc;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2, 0.29999999999999999;stroke-dashoffset:0"
- d="m 675.54284,107.17648 1,239.5 -317.99999,0.5 -1,-125 14.5,0.5 -0.5,-113.5 z"
- id="path9272"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2,0.3;stroke-dashoffset:0"
- d="m 284.54285,109.17648 0.5,100 84,-0.5 v -99.5 z"
- id="path9274"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="231.87221"
- y="146.02637"
- id="text8323-1"
- transform="scale(1.0315378,0.96942639)"><tspan
- sodipodi:role="line"
- id="tspan8321-2"
- x="231.87221"
- y="146.02637"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="font-size:8.12077141px;fill:#0000ff;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8339-6">Linux</tspan> Netdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="159.56099"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6">driver</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="173.09561"
- id="tspan8325-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">(octeontx2_vf)</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="186.63022"
- id="tspan8327-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10513">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="200.16484"
- id="tspan8329-3"
- style="stroke-width:0.81207716;fill:#782121" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9"
- width="59.718147"
- height="12.272857"
- x="207.65872"
- y="185.61246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="225.56583"
- y="192.49615"
- id="text5219-26-1-5-7-6-3-0-1-6"
- transform="scale(0.99742277,1.0025839)"><tspan
- sodipodi:role="line"
- x="225.56583"
- y="192.49615"
- id="tspan5223-10-9-1-6-8-3-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5"
- width="59.718147"
- height="12.272857"
- x="209.33406"
- y="116.46765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="226.43088"
- y="124.1223"
- id="text5219-26-1-5-7-6-3-0-1-4-7"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="226.43088"
- y="124.1223"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="317.66635"
- y="121.26925"
- id="text8323-1-9"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-3"
- x="317.66635"
- y="131.14769"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="144.6823"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9400"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9402">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9398">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="158.21692"
- id="tspan8325-2-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">driver</tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="171.75154"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9392" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="185.28616"
- id="tspan8327-7-8"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10515">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-0">-VF1</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="198.82077"
- id="tspan8329-3-3"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3"
- width="59.718147"
- height="12.272857"
- x="295.65872"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="313.79312"
- y="191.99756"
- id="text5219-26-1-5-7-6-3-0-1-6-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="313.79312"
- y="191.99756"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8"
- width="59.718147"
- height="12.272857"
- x="297.33408"
- y="115.96765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="314.65817"
- y="123.62372"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="314.65817"
- y="123.62372"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mstart);marker-start:url(#Arrow1Mstart)"
- d="m 254.54285,205.17648 c 1,29 1,28.5 1,28.5"
- id="path9405"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-1);marker-end:url(#Arrow1Mstart-1)"
- d="m 324.42292,203.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-3"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="408.28308"
- y="265.83011"
- id="text8323-7"><tspan
- sodipodi:role="line"
- id="tspan8321-3"
- x="408.28308"
- y="265.83011"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10440">DPDK</tspan> Ethdev <tspan
- style="font-size:10px;fill:#00d4aa;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8343-5">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="282.49677"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-8">driver</tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="299.16345"
- id="tspan8325-5"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="408.28308"
- y="315.83011"
- id="tspan8327-1"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#ff0000;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10517">y</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="332.49677"
- id="tspan8329-2" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3"
- width="71.28923"
- height="15.589548"
- x="376.64825"
- y="319.78531" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="410.92075"
- y="318.27411"
- id="text5219-26-1-5-7-6-3-0-1-62"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="410.92075"
- y="318.27411"
- id="tspan5223-10-9-1-6-8-3-1-0-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2"
- width="71.28923"
- height="15.589548"
- x="378.64822"
- y="237.03534" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="411.98596"
- y="238.99095"
- id="text5219-26-1-5-7-6-3-0-1-4-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="411.98596"
- y="238.99095"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="386.21152"
- y="224.15277"
- id="text8319-7-5"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8"
- x="386.21152"
- y="224.15277"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48);marker-end:url(#Arrow1Mstart-48)"
- d="m 411.29285,204.33011 c 1,29 1,28.5 1,28.5"
- id="path9405-0"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="520.61176"
- y="265.49265"
- id="text8323-7-8"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3"
- x="520.61176"
- y="265.49265"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a"
- id="tspan10440-2">DPDK</tspan> Eventdev <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="282.1593"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6">driver</tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="298.82599"
- id="tspan8325-5-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="520.61176"
- y="315.49265"
- id="tspan8327-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10519">z</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="332.1593"
- id="tspan8329-2-1" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3-6"
- width="71.28923"
- height="15.589548"
- x="484.97693"
- y="319.44785" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="522.95496"
- y="317.94733"
- id="text5219-26-1-5-7-6-3-0-1-62-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="522.95496"
- y="317.94733"
- id="tspan5223-10-9-1-6-8-3-1-0-4-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">TIM LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2-8"
- width="71.28923"
- height="15.589548"
- x="486.9769"
- y="236.69788" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="524.0202"
- y="238.66432"
- id="text5219-26-1-5-7-6-3-0-1-4-4-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="524.0202"
- y="238.66432"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7-6"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="619.6156"
- y="265.47531"
- id="text8323-7-8-3"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3-1"
- x="619.6156"
- y="265.47531"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"> <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff"
- id="tspan10562">Linux </tspan>Crypto <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3-7">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="282.14197"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6-8">driver</tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="298.80865"
- id="tspan8325-5-4-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="619.6156"
- y="315.47531"
- id="tspan8327-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10560">m</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="332.14197"
- id="tspan8329-2-1-9" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0"
- width="59.718147"
- height="12.272857"
- x="385.10458"
- y="183.92126" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="403.46997"
- y="190.80957"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="403.46997"
- y="190.80957"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8-5"
- width="59.718147"
- height="12.272857"
- x="386.77994"
- y="116.77647" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="404.33502"
- y="124.43062"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9-8"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="404.33502"
- y="124.43062"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="402.97598"
- y="143.8235"
- id="text8323-1-7"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1"
- x="402.97598"
- y="143.8235"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11102">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="157.35812"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6-5">driver</tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="170.89275"
- id="tspan8327-7-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="402.97598"
- y="184.42735"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11106">PF<tspan
- style="fill:#a02c2c;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11110">y</tspan><tspan
- style="font-size:8.12077141px;fill:#a02c2c;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-2">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="197.96198"
- id="tspan8329-3-4"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0-0"
- width="59.718147"
- height="12.272857"
- x="596.60461"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="615.51703"
- y="191.99774"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="615.51703"
- y="191.99774"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">CPT LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="608.00879"
- y="145.05219"
- id="text8323-1-7-3"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1-5"
- x="608.00879"
- y="145.05219"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716"><tspan
- id="tspan1793"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a">DPDK</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11966"> Crypto </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0066ff"
- id="tspan9396-1-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="158.58681"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan8345-6-5-4">driver</tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="172.12143"
- id="tspan8327-7-2-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="608.00879"
- y="185.65604"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan11106-8">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737"
- id="tspan11172">m</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737;stroke-width:0.81207716"
- id="tspan8347-1-2-0">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="199.19066"
- id="tspan8329-3-4-0"
- style="stroke-width:0.81207716" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="603.23218"
- y="224.74855"
- id="text8319-7-5-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8-4"
- x="603.23218"
- y="224.74855"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48-6);marker-end:url(#Arrow1Mstart-48-6)"
- d="m 628.31351,204.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-0-2"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="509.60013"
- y="128.17648"
- id="text11483"><tspan
- sodipodi:role="line"
- id="tspan11481"
- x="511.47513"
- y="128.17648"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544">D<tspan
- style="-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal;fill:#005544"
- id="tspan11962">PDK-APP1 with </tspan></tspan><tspan
- sodipodi:role="line"
- x="511.47513"
- y="144.84315"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485">one ethdev </tspan><tspan
- sodipodi:role="line"
- x="509.60013"
- y="161.50981"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11491">over Linux PF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="518.02197"
- y="179.98117"
- id="text11483-6"><tspan
- sodipodi:role="line"
- id="tspan11481-4"
- x="519.42822"
- y="179.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">DPDK-APP2 with </tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="196.64784"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485-5">Two ethdevs(PF,VF) ,</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="213.3145"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11517">eventdev, timer adapter and</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="229.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11519"> cryptodev</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="246.64784"
- style="font-size:10.66666698px;text-align:center;text-anchor:middle;fill:#00ffff"
- id="tspan11491-6" /></text>
- <path
- style="fill:#005544;stroke:#00ffff;stroke-width:1.02430511;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.02430516, 4.09722065999999963;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mstart-8)"
- d="m 483.99846,150.16496 -112.95349,13.41069 v 0 l -0.48897,-0.53643 h 0.48897"
- id="path11521"
- inkscape:connector-curvature="0" />
- <path
- style="fill:#ff0000;stroke:#ff5555;stroke-width:1.16440296;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.16440301, 2.32880602999999997;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-0)"
- d="m 545.54814,186.52569 c 26.3521,-76.73875 26.3521,-76.73875 26.3521,-76.73875"
- id="path11523"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9-0);marker-end:url(#Arrow1Mend-6-8-3-7)"
- d="m 409.29286,341.50531 v 18.3646"
- id="path7614-2-2-8-2"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8-0);fill-opacity:1;stroke:#695400;stroke-width:1.31599998;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4-9"
- width="81.505402"
- height="17.62063"
- x="372.79016"
- y="360.37729" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="380.98218"
- y="371.97293"
- id="text8319-7-7-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3-1"
- x="380.98218"
- y="371.97293"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index 7614e1a368..2ff91a6018 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -15,4 +15,3 @@ The following are platform specific guides and setup information.
dpaa
dpaa2
octeontx
- octeontx2
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
deleted file mode 100644
index 5ab43abbdd..0000000000
--- a/doc/guides/platform/octeontx2.rst
+++ /dev/null
@@ -1,520 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-Marvell OCTEON TX2 Platform Guide
-=================================
-
-This document gives an overview of **Marvell OCTEON TX2** RVU H/W block,
-packet flow and procedure to build DPDK on OCTEON TX2 platform.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Supported OCTEON TX2 SoCs
--------------------------
-
-- CN98xx
-- CN96xx
-- CN93xx
-
-OCTEON TX2 Resource Virtualization Unit architecture
-----------------------------------------------------
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the
-RVU architecture and a resource provisioning example.
-
-.. _figure_octeontx2_resource_virtualization:
-
-.. figure:: img/octeontx2_resource_virtualization.*
-
- OCTEON TX2 Resource virtualization architecture and provisioning example
-
-
-Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW
-resources belonging to the network, crypto and other functional blocks onto
-PCI-compatible physical and virtual functions.
-
-Each functional block has multiple local functions (LFs) for
-provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
-physical functions (PFs) and virtual functions (VFs).
-
-The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local
-functions (LFs) provided by the RVU and its functional mapping to
-DPDK subsystem.
-
-.. _table_octeontx2_rvu_dpdk_mapping:
-
-.. table:: RVU managed functional blocks and its mapping to DPDK subsystem
-
- +---+-----+--------------------------------------------------------------+
- | # | LF | DPDK subsystem mapping |
- +===+=====+==============================================================+
- | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
- +---+-----+--------------------------------------------------------------+
- | 2 | NPA | rte_mempool |
- +---+-----+--------------------------------------------------------------+
- | 3 | NPC | rte_flow |
- +---+-----+--------------------------------------------------------------+
- | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter |
- +---+-----+--------------------------------------------------------------+
- | 5 | SSO | rte_eventdev |
- +---+-----+--------------------------------------------------------------+
- | 6 | TIM | rte_event_timer_adapter |
- +---+-----+--------------------------------------------------------------+
- | 7 | LBK | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 8 | DPI | rte_rawdev |
- +---+-----+--------------------------------------------------------------+
- | 9 | SDP | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 10| REE | rte_regexdev |
- +---+-----+--------------------------------------------------------------+
-
-PF0 is called the administrative/admin function (AF) and has exclusive
-privileges to provision the RVU functional blocks' LFs to each PF/VF.
-
-PFs/VFs communicate with the AF via a shared memory region (mailbox). Upon receiving
-requests from a PF/VF, the AF performs resource provisioning and other HW configuration.
-
-The AF is always attached to the host, but PFs/VFs may be used by the host kernel
-itself, attached to VMs, or attached to userspace applications such as DPDK. So, the
-AF has to handle provisioning/configuration requests sent by any device from any domain.
-
-The AF driver does not receive or process any data.
-It is only a configuration driver used in the control path.
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a
-resource provisioning example where,
-
-1. PFx and PFx-VF0 bound to Linux netdev driver.
-2. PFx-VF1 ethdev driver bound to the first DPDK application.
-3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
-
-LBK HW Access
--------------
-
-The Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back to
-NIX-TX. The loopback block has N channels and contains data buffering that is shared
-across all channels. The LBK HW unit is abstracted using the ethdev subsystem, where
-PF0's VFs are exposed as ethdev devices and odd-even pairs of VFs are tied together;
-that is, packets sent on an odd VF are received on the paired even VF and vice versa.
-This enables an HW-accelerated means of communication between two domains, with the
-even VF bound to the first domain and the odd VF bound to the second domain.
-
-Typical application usage models are:
-
-#. Communication between the Linux kernel and DPDK application.
-#. Exception path to Linux kernel from DPDK application as SW ``KNI`` replacement.
-#. Communication between two different DPDK applications.
-
-SDP interface
--------------
-
-The System DPI Packet Interface unit (SDP) provides PCIe endpoint support for a
-remote host to DMA packets into and out of the OCTEON TX2 SoC. The SDP interface
-comes alive only when the OCTEON TX2 SoC is connected in PCIe endpoint mode. It can
-be used to send/receive packets to/from a remote host machine using the input/output
-queue pairs exposed to it. The SDP interface receives input packets from the remote
-host via NIX-RX and sends packets to the remote host using NIX-TX. The remote host
-machine needs to use a corresponding driver (kernel/user mode) to communicate with
-the SDP interface on the OCTEON TX2 SoC. SDP supports a single PCIe SRIOV physical
-function (PF) and multiple virtual functions (VFs). Users can bind the PF or a VF to
-use the SDP interface, and it will be enumerated as ethdev ports.
-
-SDP primarily enables the smart NIC use case. Typical usage models are:
-
-#. Communication channel between remote host and OCTEON TX2 SoC over PCIe.
-#. Transfer packets received from the network interface to the remote host over PCIe and
- vice versa.
-
-OCTEON TX2 packet flow
-----------------------
-
-The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts
-the packet flow on OCTEON TX2 SoC in conjunction with use of various HW accelerators.
-
-.. _figure_octeontx2_packet_flow_hw_accelerators:
-
-.. figure:: img/octeontx2_packet_flow_hw_accelerators.*
-
- OCTEON TX2 packet flow in conjunction with use of HW accelerators
-
-HW Offload Drivers
-------------------
-
-This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
-
-#. **Ethdev Driver**
- See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
-
-#. **Mempool Driver**
- See :doc:`../mempool/octeontx2` for NPA mempool driver information.
-
-#. **Event Device Driver**
- See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
-
-#. **Crypto Device Driver**
- See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-
-Procedure to Setup Platform
----------------------------
-
-There are three main prerequisites for setting up DPDK on an
-OCTEON TX2-compatible board:
-
-1. **OCTEON TX2 Linux kernel driver**
-
- The dependent kernel drivers can be obtained from the
- `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
-
- Alternatively, the Marvell SDK also provides the required kernel drivers.
-
- Linux kernel should be configured with the following features enabled:
-
-.. code-block:: console
-
- # 64K pages enabled for better performance
- CONFIG_ARM64_64K_PAGES=y
- CONFIG_ARM64_VA_BITS_48=y
- # huge pages support enabled
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- # VFIO enabled with TYPE1 IOMMU at minimum
- CONFIG_VFIO_IOMMU_TYPE1=y
- CONFIG_VFIO_VIRQFD=y
- CONFIG_VFIO=y
- CONFIG_VFIO_NOIOMMU=y
- CONFIG_VFIO_PCI=y
- CONFIG_VFIO_PCI_MMAP=y
- # SMMUv3 driver
- CONFIG_ARM_SMMU_V3=y
- # ARMv8.1 LSE atomics
- CONFIG_ARM64_LSE_ATOMICS=y
- # OCTEONTX2 drivers
- CONFIG_OCTEONTX2_MBOX=y
- CONFIG_OCTEONTX2_AF=y
- # Enable if netdev PF driver required
- CONFIG_OCTEONTX2_PF=y
- # Enable if netdev VF driver required
- CONFIG_OCTEONTX2_VF=y
- CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
- # Enable if OCTEONTX2 DMA PF driver required
- CONFIG_OCTEONTX2_DPI_PF=n
-
-2. **ARM64 Linux Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
-
- Alternatively, the Marvell SDK also provides GNU GCC toolchain, which is
- optimized for OCTEON TX2 CPU.
-
-3. **Root file system**
-
- Any *aarch64* supporting filesystem may be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- Alternatively, the Marvell SDK provides the buildroot based root filesystem.
- The SDK includes all the above prerequisites necessary to bring up the OCTEON TX2 board.
-
-- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
-
-
-Debugging Options
------------------
-
-.. _table_octeontx2_common_debug_options:
-
-.. table:: OCTEON TX2 common debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | Common | --log-level='pmd\.octeontx2\.base,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | Mailbox | --log-level='pmd\.octeontx2\.mbox,8' |
- +---+------------+-------------------------------------------------------+
-
-Debugfs support
-~~~~~~~~~~~~~~~
-
-The **OCTEON TX2 Linux kernel driver** provides support for dumping RVU block
-context and stats using debugfs.
-
-Enable ``debugfs`` by:
-
-1. Compile the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
-2. Boot the OCTEON TX2 with the debugfs-enabled kernel.
-3. Verify that ``debugfs`` is mounted by default (``mount | grep -i debugfs``) or mount it manually:
-
-.. code-block:: console
-
- # mount -t debugfs none /sys/kernel/debug
-
-Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
-SSO and CGX.
-
-The file structure under ``/sys/kernel/debug`` is as follows:
-
-.. code-block:: console
-
- octeontx2/
- |-- cgx
- | |-- cgx0
- | | '-- lmac0
- | | '-- stats
- | |-- cgx1
- | | |-- lmac0
- | | | '-- stats
- | | '-- lmac1
- | | '-- stats
- | '-- cgx2
- | '-- lmac0
- | '-- stats
- |-- cpt
- | |-- cpt_engines_info
- | |-- cpt_engines_sts
- | |-- cpt_err_info
- | |-- cpt_lfs_info
- | '-- cpt_pc
- |-- nix
- | |-- cq_ctx
- | |-- ndc_rx_cache
- | |-- ndc_rx_hits_miss
- | |-- ndc_tx_cache
- | |-- ndc_tx_hits_miss
- | |-- qsize
- | |-- rq_ctx
- | |-- sq_ctx
- | '-- tx_stall_hwissue
- |-- npa
- | |-- aura_ctx
- | |-- ndc_cache
- | |-- ndc_hits_miss
- | |-- pool_ctx
- | '-- qsize
- |-- npc
- | |-- mcam_info
- | '-- rx_miss_act_stats
- |-- rsrc_alloc
- '-- sso
- |-- hws
- | '-- sso_hws_info
- '-- hwgrp
- |-- sso_hwgrp_aq_thresh
- |-- sso_hwgrp_iaq_walk
- |-- sso_hwgrp_pc
- |-- sso_hwgrp_free_list_walk
- |-- sso_hwgrp_ient_walk
- '-- sso_hwgrp_taq_walk
-
-RVU block LF allocation:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/rsrc_alloc
-
- pcifunc NPA NIX SSO GROUP SSOWS TIM CPT
- PF1 0 0
- PF4 1
- PF13 0, 1 0, 1 0
-
-CGX example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats
-
- =======Link Status======
- Link is UP 40000 Mbps
- =======RX_STATS======
- Received packets: 0
- Octets of received packets: 0
- Received PAUSE packets: 0
- Received PAUSE and control packets: 0
- Filtered DMAC0 (NIX-bound) packets: 0
- Filtered DMAC0 (NIX-bound) octets: 0
- Packets dropped due to RX FIFO full: 0
- Octets dropped due to RX FIFO full: 0
- Error packets: 0
- Filtered DMAC1 (NCSI-bound) packets: 0
- Filtered DMAC1 (NCSI-bound) octets: 0
- NCSI-bound packets dropped: 0
- NCSI-bound octets dropped: 0
- =======TX_STATS======
- Packets dropped due to excessive collisions: 0
- Packets dropped due to excessive deferral: 0
- Multiple collisions before successful transmission: 0
- Single collisions before successful transmission: 0
- Total octets sent on the interface: 0
- Total frames sent on the interface: 0
- Packets sent with an octet count < 64: 0
- Packets sent with an octet count == 64: 0
- Packets sent with an octet count of 65-127: 0
- Packets sent with an octet count of 128-255: 0
- Packets sent with an octet count of 256-511: 0
- Packets sent with an octet count of 512-1023: 0
- Packets sent with an octet count of 1024-1518: 0
- Packets sent with an octet count of > 1518: 0
- Packets sent to a broadcast DMAC: 0
- Packets sent to the multicast DMAC: 0
- Transmit underflow and were truncated: 0
- Control/PAUSE packets sent: 0
-
-CPT example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cpt/cpt_pc
-
- CPT instruction requests 0
- CPT instruction latency 0
- CPT NCB read requests 0
- CPT NCB read latency 0
- CPT read requests caused by UC fills 0
- CPT active cycles pc 1395642
- CPT clock count pc 5579867595493
-
-NIX example usage:
-
-.. code-block:: console
-
- Usage: echo <nixlf> [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
-
- =====cq_ctx for nixlf:0 and qidx:0 is=====
- W0: base 158ef1a00
-
- W1: wrptr 0
- W1: avg_con 0
- W1: cint_idx 0
- W1: cq_err 0
- W1: qint_idx 0
- W1: bpid 0
- W1: bp_ena 0
-
- W2: update_time 31043
- W2:avg_level 255
- W2: head 0
- W2:tail 0
-
- W3: cq_err_int_ena 5
- W3:cq_err_int 0
- W3: qsize 4
- W3:caching 1
- W3: substream 0x000
- W3: ena 1
- W3: drop_ena 1
- W3: drop 64
- W3: bp 0
-
-NPA example usage:
-
-.. code-block:: console
-
- Usage: echo <npalf> [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
-
- ======POOL : 0=======
- W0: Stack base 1375bff00
- W1: ena 1
- W1: nat_align 1
- W1: stack_caching 1
- W1: stack_way_mask 0
- W1: buf_offset 1
- W1: buf_size 19
- W2: stack_max_pages 24315
- W2: stack_pages 24314
- W3: op_pc 267456
- W4: stack_offset 2
- W4: shift 5
- W4: avg_level 255
- W4: avg_con 0
- W4: fc_ena 0
- W4: fc_stype 0
- W4: fc_hyst_bits 0
- W4: fc_up_crossing 0
- W4: update_time 62993
- W5: fc_addr 0
- W6: ptr_start 1593adf00
- W7: ptr_end 180000000
- W8: err_int 0
- W8: err_int_ena 7
- W8: thresh_int 0
- W8: thresh_int_ena 0
- W8: thresh_up 0
- W8: thresh_qint_idx 0
- W8: err_qint_idx 0
-
-NPC example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/npc/mcam_info
-
- NPC MCAM info:
- RX keywidth : 224bits
- TX keywidth : 224bits
-
- MCAM entries : 2048
- Reserved : 158
- Available : 1890
-
- MCAM counters : 512
- Reserved : 1
- Available : 511
-
-SSO example usage:
-
-.. code-block:: console
-
- Usage: echo [<hws>/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
- echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
-
- ==================================================
- SSOW HWS[0] Arbitration State 0x0
- SSOW HWS[0] Guest Machine Control 0x0
- SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff
- ==================================================
-
-Compile DPDK
-------------
-
-DPDK may be compiled either natively on the OCTEON TX2 platform or cross-compiled
-on an x86-based platform.
-
-Native Compilation
-~~~~~~~~~~~~~~~~~~
-
-.. code-block:: console
-
- meson build
- ninja -C build
-
-Cross Compilation
-~~~~~~~~~~~~~~~~~
-
-Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
-
-.. code-block:: console
-
- meson build --cross-file config/arm/arm64_octeontx2_linux_gcc
- ninja -C build
-
-.. note::
-
- By default, meson cross compilation uses the ``aarch64-linux-gnu-gcc`` toolchain.
- If the Marvell toolchain is available, it can be used by overriding the
- c, cpp, ar and strip ``binaries`` attributes to the respective Marvell
- toolchain binaries in the ``config/arm/arm64_octeontx2_linux_gcc`` file.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5581822d10..4e5b23c53d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,20 +125,3 @@ Deprecation Notices
applications should be updated to use the ``dmadev`` library instead,
with the underlying HW-functionality being provided by the ``ioat`` or
``idxd`` dma drivers
-
-* drivers/octeontx2: remove octeontx2 drivers
-
- In the view of enabling a unified driver for ``octeontx2(cn9k)``/``octeontx3(cn10k)``,
- remove the ``drivers/octeontx2`` drivers and replace them with ``drivers/cnxk/``, which
- supports both ``octeontx2(cn9k)`` and ``octeontx3(cn10k)`` SoCs.
- This deprecation notice covers the following actions in the DPDK v22.02 release.
-
- #. Replace ``drivers/common/octeontx2/`` with ``drivers/common/cnxk/``
- #. Replace ``drivers/mempool/octeontx2/`` with ``drivers/mempool/cnxk/``
- #. Replace ``drivers/net/octeontx2/`` with ``drivers/net/cnxk/``
- #. Replace ``drivers/event/octeontx2/`` with ``drivers/event/cnxk/``
- #. Replace ``drivers/crypto/octeontx2/`` with ``drivers/crypto/cnxk/``
- #. Rename ``drivers/regex/octeontx2/`` as ``drivers/regex/cn9k/``
- #. Rename ``config/arm/arm64_octeontx2_linux_gcc`` as ``config/arm/arm64_cn9k_linux_gcc``
-
- The last two actions align the naming convention with the cnxk scheme.
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 1a0e6111d7..2f6973cef3 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -146,17 +146,17 @@ New Features
of via software, reducing cycles spent copying large blocks of data in
applications.
-* **Added Marvell OCTEON TX2 drivers.**
+* **Added Marvell OCTEON 9 drivers.**
Added the new ``ethdev``, ``eventdev``, ``mempool``, ``eventdev Rx adapter``,
``eventdev Tx adapter``, ``eventdev Timer adapter`` and ``rawdev DMA``
- drivers for various HW co-processors available in ``OCTEON TX2`` SoC.
+ drivers for various HW co-processors available in ``OCTEON 9`` SoC.
- See :doc:`../platform/octeontx2` and driver information:
+ See ``platform/octeontx2`` and driver information:
- * :doc:`../nics/octeontx2`
- * :doc:`../mempool/octeontx2`
- * :doc:`../eventdevs/octeontx2`
+ * ``nics/octeontx2``
+ * ``mempool/octeontx2``
+ * ``eventdevs/octeontx2``
* ``rawdevs/octeontx2_dma``
* **Introduced the Intel NTB PMD.**
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 302b3e5f37..6c3aa14c0d 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -187,12 +187,12 @@ New Features
Added support for asymmetric operations to Marvell OCTEON TX crypto PMD.
Supports RSA and modexp operations.
-* **Added Marvell OCTEON TX2 crypto PMD.**
+* **Added Marvell OCTEON 9 crypto PMD.**
- Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
+ Added a new PMD for hardware crypto offload block on ``OCTEON 9``
SoC.
- See :doc:`../cryptodevs/octeontx2` for more details
+ See ``cryptodevs/octeontx2`` for more details
* **Updated NXP crypto PMDs for PDCP support.**
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 925985b4f8..daeca868e0 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -175,18 +175,18 @@ New Features
armv8 crypto library is not used anymore. The library name has been changed
from armv8_crypto to AArch64crypto.
-* **Added inline IPsec support to Marvell OCTEON TX2 PMD.**
+* **Added inline IPsec support to Marvell OCTEON 9 PMD.**
- Added inline IPsec support to Marvell OCTEON TX2 PMD. With this feature,
+ Added inline IPsec support to Marvell OCTEON 9 PMD. With this feature,
applications will be able to offload entire IPsec offload to the hardware.
For the configured sessions, hardware will do the lookup and perform
decryption and IPsec transformation. For the outbound path, applications
can submit a plain packet to the PMD, and it will be sent out on the wire
after doing encryption and IPsec transformation of the packet.
-* **Added Marvell OCTEON TX2 End Point rawdev PMD.**
+* **Added Marvell OCTEON 9 End Point rawdev PMD.**
- Added a new OCTEON TX2 rawdev PMD for End Point mode of operation.
+ Added a new OCTEON 9 rawdev PMD for End Point mode of operation.
See ``rawdevs/octeontx2_ep`` for more details on this new PMD.
* **Added event mode to l3fwd sample application.**
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index a38c6c673d..b853f00ae6 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -116,9 +116,9 @@ New Features
* Added support for DCF (Device Config Function) feature.
* Added switch filter support for Intel DCF.
-* **Updated Marvell OCTEON TX2 ethdev driver.**
+* **Updated Marvell OCTEON 9 ethdev driver.**
- Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+ Updated Marvell OCTEON 9 ethdev driver with traffic manager support,
including:
* Hierarchical Scheduling with DWRR and SP.
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 445e40fbac..e597cd0130 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -183,11 +183,11 @@ New Features
* Added support for Intel GEN2 QuickAssist device 200xx
(PF device id 0x18ee, VF device id 0x18ef).
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Added Chacha20-Poly1305 AEAD algorithm support in OCTEON TX2 crypto PMD.
+ * Added Chacha20-Poly1305 AEAD algorithm support in OCTEON 9 crypto PMD.
- * Updated the OCTEON TX2 crypto PMD to support ``rte_security`` lookaside
+ * Updated the OCTEON 9 crypto PMD to support ``rte_security`` lookaside
protocol offload for IPsec.
* **Added support for BPF_ABS/BPF_IND load instructions.**
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7fd15398e4..4ce9b6aea9 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -265,9 +265,9 @@ New Features
* Added AES-GCM support.
* Added cipher only offload support.
-* **Updated Marvell OCTEON TX2 crypto PMD.**
+* **Updated Marvell OCTEON 9 crypto PMD.**
- * Updated the OCTEON TX2 crypto PMD lookaside protocol offload for IPsec with
+ * Updated the OCTEON 9 crypto PMD lookaside protocol offload for IPsec with
IPv6 support.
* **Updated Intel QAT PMD.**
@@ -286,9 +286,9 @@ New Features
``rte_security_pdcp_xform`` in ``rte_security`` lib is updated to enable
5G NR processing of SDAP headers in PMDs.
-* **Added Marvell OCTEON TX2 regex PMD.**
+* **Added Marvell OCTEON 9 regex PMD.**
- Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
+ Added a new PMD for the hardware regex offload block for OCTEON 9 SoC.
See ``regexdevs/octeontx2`` for more details.
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 5fbf5b3d43..ac996dce95 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -123,14 +123,14 @@ New Features
enable applications to add/remove user callbacks which get called
for every enqueue/dequeue operation.
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Updated the OCTEON TX2 crypto PMD lookaside protocol offload for IPsec with
+ * Updated the OCTEON 9 crypto PMD lookaside protocol offload for IPsec with
ESN and anti-replay support.
- * Updated the OCTEON TX2 crypto PMD with CN98xx support.
- * Added support for aes-cbc sha1-hmac cipher combination in OCTEON TX2 crypto
+ * Updated the OCTEON 9 crypto PMD with CN98xx support.
+ * Added support for aes-cbc sha1-hmac cipher combination in OCTEON 9 crypto
PMD lookaside protocol offload for IPsec.
- * Added support for aes-cbc sha256-128-hmac cipher combination in OCTEON TX2
+ * Added support for aes-cbc sha256-128-hmac cipher combination in OCTEON 9
crypto PMD lookaside protocol offload for IPsec.
* **Added mlx5 compress PMD.**
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 49044ed422..89a261e5f5 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -121,7 +121,7 @@ New Features
* Added GTPU TEID support for DCF switch filter.
* Added flow priority support for DCF switch filter.
-* **Updated Marvell OCTEON TX2 ethdev driver.**
+* **Updated Marvell OCTEON 9 ethdev driver.**
* Added support for flow action port id.
@@ -187,9 +187,9 @@ New Features
* Added support for ``DIGEST_ENCRYPTED`` mode in the OCTEON TX crypto PMD.
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Added support for ``DIGEST_ENCRYPTED`` mode in OCTEON TX2 crypto PMD.
+ * Added support for ``DIGEST_ENCRYPTED`` mode in OCTEON 9 crypto PMD.
* Added support in lookaside protocol offload mode for IPsec with
UDP encapsulation support for NAT Traversal.
* Added support in lookaside protocol offload mode for IPsec with
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index db09ec01ea..f2497f1447 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -54,7 +54,7 @@ New Features
* **Added Marvell CNXK DMA driver.**
Added dmadev driver for the DPI DMA hardware accelerator
- of Marvell OCTEONTX2 and OCTEONTX3 family of SoCs.
+ of Marvell OCTEON 9 and OCTEON 10 family of SoCs.
* **Added NXP DPAA DMA driver.**
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index ce93483291..d3d5ebe4dc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -157,7 +157,6 @@ The following are the application command-line options:
crypto_mvsam
crypto_null
crypto_octeontx
- crypto_octeontx2
crypto_openssl
crypto_qat
crypto_scheduler
diff --git a/drivers/common/meson.build b/drivers/common/meson.build
index 4acbad60b1..ea261dd70a 100644
--- a/drivers/common/meson.build
+++ b/drivers/common/meson.build
@@ -8,5 +8,4 @@ drivers = [
'iavf',
'mvep',
'octeontx',
- 'octeontx2',
]
diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h
deleted file mode 100644
index e3b68505b7..0000000000
--- a/drivers/common/octeontx2/hw/otx2_nix.h
+++ /dev/null
@@ -1,1391 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NIX_HW_H__
-#define __OTX2_NIX_HW_H__
-
-/* Register offsets */
-
-#define NIX_AF_CFG (0x0ull)
-#define NIX_AF_STATUS (0x10ull)
-#define NIX_AF_NDC_CFG (0x18ull)
-#define NIX_AF_CONST (0x20ull)
-#define NIX_AF_CONST1 (0x28ull)
-#define NIX_AF_CONST2 (0x30ull)
-#define NIX_AF_CONST3 (0x38ull)
-#define NIX_AF_SQ_CONST (0x40ull)
-#define NIX_AF_CQ_CONST (0x48ull)
-#define NIX_AF_RQ_CONST (0x50ull)
-#define NIX_AF_PSE_CONST (0x60ull)
-#define NIX_AF_TL1_CONST (0x70ull)
-#define NIX_AF_TL2_CONST (0x78ull)
-#define NIX_AF_TL3_CONST (0x80ull)
-#define NIX_AF_TL4_CONST (0x88ull)
-#define NIX_AF_MDQ_CONST (0x90ull)
-#define NIX_AF_MC_MIRROR_CONST (0x98ull)
-#define NIX_AF_LSO_CFG (0xa8ull)
-#define NIX_AF_BLK_RST (0xb0ull)
-#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
-#define NIX_AF_RX_CFG (0xd0ull)
-#define NIX_AF_AVG_DELAY (0xe0ull)
-#define NIX_AF_CINT_DELAY (0xf0ull)
-#define NIX_AF_RX_MCAST_BASE (0x100ull)
-#define NIX_AF_RX_MCAST_CFG (0x110ull)
-#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
-#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
-#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
-#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
-#define NIX_AF_LF_RST (0x150ull)
-#define NIX_AF_GEN_INT (0x160ull)
-#define NIX_AF_GEN_INT_W1S (0x168ull)
-#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
-#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
-#define NIX_AF_ERR_INT (0x180ull)
-#define NIX_AF_ERR_INT_W1S (0x188ull)
-#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NIX_AF_RAS (0x1a0ull)
-#define NIX_AF_RAS_W1S (0x1a8ull)
-#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
-#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
-#define NIX_AF_RVU_INT (0x1c0ull)
-#define NIX_AF_RVU_INT_W1S (0x1c8ull)
-#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
-#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
-#define NIX_AF_TCP_TIMER (0x1e0ull)
-#define NIX_AF_RX_DEF_OL2 (0x200ull)
-#define NIX_AF_RX_DEF_OIP4 (0x210ull)
-#define NIX_AF_RX_DEF_IIP4 (0x220ull)
-#define NIX_AF_RX_DEF_OIP6 (0x230ull)
-#define NIX_AF_RX_DEF_IIP6 (0x240ull)
-#define NIX_AF_RX_DEF_OTCP (0x250ull)
-#define NIX_AF_RX_DEF_ITCP (0x260ull)
-#define NIX_AF_RX_DEF_OUDP (0x270ull)
-#define NIX_AF_RX_DEF_IUDP (0x280ull)
-#define NIX_AF_RX_DEF_OSCTP (0x290ull)
-#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
-#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
-#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
-#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
-#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
-#define NIX_AF_AQ_CFG (0x400ull)
-#define NIX_AF_AQ_BASE (0x410ull)
-#define NIX_AF_AQ_STATUS (0x420ull)
-#define NIX_AF_AQ_DOOR (0x430ull)
-#define NIX_AF_AQ_DONE_WAIT (0x440ull)
-#define NIX_AF_AQ_DONE (0x450ull)
-#define NIX_AF_AQ_DONE_ACK (0x460ull)
-#define NIX_AF_AQ_DONE_TIMER (0x470ull)
-#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
-#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
-#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_SW_SYNC (0x550ull)
-#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
-#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull)
-#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
-#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
-#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
-#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
-#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
-#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
-#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
-#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
-#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull)
-#define NIX_AF_PSE_SHAPER_CFG (0x810ull)
-#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull)
-#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18)
-#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16)
-#define NIX_AF_SDP_LINK_CREDIT (0xa40ull)
-#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3)
-#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3)
-#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHEDULE(a) \
- (0x1000ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE(a) \
- (0x1010ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_CIR(a) \
- (0x1020ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PIR(a) \
- (0x1030ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHED_STATE(a) \
- (0x1040ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE_STATE(a) \
- (0x1050ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SW_XOFF(a) \
- (0x1070ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_TOPOLOGY(a) \
- (0x1080ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PARENT(a) \
- (0x1088ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG0(a) \
- (0x10c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG1(a) \
- (0x10c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG2(a) \
- (0x10d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG3(a) \
- (0x10d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHEDULE(a) \
- (0x1200ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE(a) \
- (0x1210ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_CIR(a) \
- (0x1220ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PIR(a) \
- (0x1230ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHED_STATE(a) \
- (0x1240ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE_STATE(a) \
- (0x1250ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SW_XOFF(a) \
- (0x1270ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_TOPOLOGY(a) \
- (0x1280ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PARENT(a) \
- (0x1288ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG0(a) \
- (0x12c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG1(a) \
- (0x12c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG2(a) \
- (0x12d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG3(a) \
- (0x12d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHEDULE(a) \
- (0x1400ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE(a) \
- (0x1410ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_CIR(a) \
- (0x1420ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PIR(a) \
- (0x1430ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHED_STATE(a) \
- (0x1440ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE_STATE(a) \
- (0x1450ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SW_XOFF(a) \
- (0x1470ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PARENT(a) \
- (0x1480ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_MD_DEBUG(a) \
- (0x14c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_CFG(a) \
- (0x1600ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_BP_STATUS(a) \
- (0x1610ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
- (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
- (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
-#define NIX_AF_TX_MCASTX(a) \
- (0x1900ull | (uint64_t)(a) << 15)
-#define NIX_AF_TX_VTAG_DEFX_CTL(a) \
- (0x1a00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_VTAG_DEFX_DATA(a) \
- (0x1a10ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_BPIDX_STATUS(a) \
- (0x1a20ull | (uint64_t)(a) << 17)
-#define NIX_AF_RX_CHANX_CFG(a) \
- (0x1a30ull | (uint64_t)(a) << 15)
-#define NIX_AF_CINT_TIMERX(a) \
- (0x1a40ull | (uint64_t)(a) << 18)
-#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
- (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_CFG(a) \
- (0x4000ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_CFG(a) \
- (0x4020ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG2(a) \
- (0x4028ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_BASE(a) \
- (0x4030ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_CFG(a) \
- (0x4040ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_BASE(a) \
- (0x4050ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_CFG(a) \
- (0x4060ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_BASE(a) \
- (0x4070ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG(a) \
- (0x4080ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_PARSE_CFG(a) \
- (0x4090ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_CFG(a) \
- (0x40a0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_CFG(a) \
- (0x40c0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_BASE(a) \
- (0x40d0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_CFG(a) \
- (0x4100ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_BASE(a) \
- (0x4110ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_CFG(a) \
- (0x4120ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_BASE(a) \
- (0x4130ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \
- (0x4140ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \
- (0x4148ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \
- (0x4150ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \
- (0x4158ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \
- (0x4170ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_STATUS(a) \
- (0x4180ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
- (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_LOCKX(a, b) \
- (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_TX_STATX(a, b) \
- (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RX_STATX(a, b) \
- (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RSS_GRPX(a, b) \
- (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_NPC_MC_RCV (0x4700ull)
-#define NIX_AF_RX_NPC_MC_DROP (0x4710ull)
-#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull)
-#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull)
-#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \
- (0x4800ull | (uint64_t)(a) << 16)
-#define NIX_PRIV_AF_INT_CFG (0x8000000ull)
-#define NIX_PRIV_LFX_CFG(a) \
- (0x8000010ull | (uint64_t)(a) << 8)
-#define NIX_PRIV_LFX_INT_CFG(a) \
- (0x8000020ull | (uint64_t)(a) << 8)
-#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull)
-
-#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3)
-#define NIX_LF_CFG (0x100ull)
-#define NIX_LF_GINT (0x200ull)
-#define NIX_LF_GINT_W1S (0x208ull)
-#define NIX_LF_GINT_ENA_W1C (0x210ull)
-#define NIX_LF_GINT_ENA_W1S (0x218ull)
-#define NIX_LF_ERR_INT (0x220ull)
-#define NIX_LF_ERR_INT_W1S (0x228ull)
-#define NIX_LF_ERR_INT_ENA_W1C (0x230ull)
-#define NIX_LF_ERR_INT_ENA_W1S (0x238ull)
-#define NIX_LF_RAS (0x240ull)
-#define NIX_LF_RAS_W1S (0x248ull)
-#define NIX_LF_RAS_ENA_W1C (0x250ull)
-#define NIX_LF_RAS_ENA_W1S (0x258ull)
-#define NIX_LF_SQ_OP_ERR_DBG (0x260ull)
-#define NIX_LF_MNQ_ERR_DBG (0x270ull)
-#define NIX_LF_SEND_ERR_DBG (0x280ull)
-#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3)
-#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3)
-#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3)
-#define NIX_LF_RQ_OP_INT (0x900ull)
-#define NIX_LF_RQ_OP_OCTS (0x910ull)
-#define NIX_LF_RQ_OP_PKTS (0x920ull)
-#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull)
-#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull)
-#define NIX_LF_RQ_OP_RE_PKTS (0x950ull)
-#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull)
-#define NIX_LF_SQ_OP_INT (0xa00ull)
-#define NIX_LF_SQ_OP_OCTS (0xa10ull)
-#define NIX_LF_SQ_OP_PKTS (0xa20ull)
-#define NIX_LF_SQ_OP_STATUS (0xa30ull)
-#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull)
-#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull)
-#define NIX_LF_CQ_OP_INT (0xb00ull)
-#define NIX_LF_CQ_OP_DOOR (0xb30ull)
-#define NIX_LF_CQ_OP_STATUS (0xb40ull)
-#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NIX_TX_VTAGOP_NOP (0x0ull)
-#define NIX_TX_VTAGOP_INSERT (0x1ull)
-#define NIX_TX_VTAGOP_REPLACE (0x2ull)
-
-#define NIX_TX_ACTIONOP_DROP (0x0ull)
-#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
-#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
-#define NIX_TX_ACTIONOP_MCAST (0x3ull)
-#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
-
-#define NIX_INTF_RX (0x0ull)
-#define NIX_INTF_TX (0x1ull)
-
-#define NIX_TXLAYER_OL3 (0x0ull)
-#define NIX_TXLAYER_OL4 (0x1ull)
-#define NIX_TXLAYER_IL3 (0x2ull)
-#define NIX_TXLAYER_IL4 (0x3ull)
-
-#define NIX_SUBDC_NOP (0x0ull)
-#define NIX_SUBDC_EXT (0x1ull)
-#define NIX_SUBDC_CRC (0x2ull)
-#define NIX_SUBDC_IMM (0x3ull)
-#define NIX_SUBDC_SG (0x4ull)
-#define NIX_SUBDC_MEM (0x5ull)
-#define NIX_SUBDC_JUMP (0x6ull)
-#define NIX_SUBDC_WORK (0x7ull)
-#define NIX_SUBDC_SOD (0xfull)
-
-#define NIX_STYPE_STF (0x0ull)
-#define NIX_STYPE_STT (0x1ull)
-#define NIX_STYPE_STP (0x2ull)
-
-#define NIX_STAT_LF_TX_TX_UCAST (0x0ull)
-#define NIX_STAT_LF_TX_TX_BCAST (0x1ull)
-#define NIX_STAT_LF_TX_TX_MCAST (0x2ull)
-#define NIX_STAT_LF_TX_TX_DROP (0x3ull)
-#define NIX_STAT_LF_TX_TX_OCTS (0x4ull)
-
-#define NIX_STAT_LF_RX_RX_OCTS (0x0ull)
-#define NIX_STAT_LF_RX_RX_UCAST (0x1ull)
-#define NIX_STAT_LF_RX_RX_BCAST (0x2ull)
-#define NIX_STAT_LF_RX_RX_MCAST (0x3ull)
-#define NIX_STAT_LF_RX_RX_DROP (0x4ull)
-#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull)
-#define NIX_STAT_LF_RX_RX_FCS (0x6ull)
-#define NIX_STAT_LF_RX_RX_ERR (0x7ull)
-#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull)
-#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull)
-#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull)
-#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull)
-
-#define NIX_SQOPERR_SQ_OOR (0x0ull)
-#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull)
-#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull)
-#define NIX_SQOPERR_SQ_DISABLED (0x3ull)
-#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull)
-#define NIX_SQOPERR_SQE_OFLOW (0x5ull)
-#define NIX_SQOPERR_SQB_NULL (0x6ull)
-#define NIX_SQOPERR_SQB_FAULT (0x7ull)
-
-#define NIX_XQESZ_W64 (0x0ull)
-#define NIX_XQESZ_W16 (0x1ull)
-
-#define NIX_VTAGSIZE_T4 (0x0ull)
-#define NIX_VTAGSIZE_T8 (0x1ull)
-
-#define NIX_RX_ACTIONOP_DROP (0x0ull)
-#define NIX_RX_ACTIONOP_UCAST (0x1ull)
-#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull)
-#define NIX_RX_ACTIONOP_MCAST (0x3ull)
-#define NIX_RX_ACTIONOP_RSS (0x4ull)
-#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull)
-#define NIX_RX_ACTIONOP_MIRROR (0x6ull)
-
-#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull)
-#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull)
-#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull)
-#define NIX_TX_VTAGACTION_VTAG0_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6)
-#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
-#define NIX_RQINT_DROP (0x0ull)
-#define NIX_RQINT_RED (0x1ull)
-#define NIX_RQINT_R2 (0x2ull)
-#define NIX_RQINT_R3 (0x3ull)
-#define NIX_RQINT_R4 (0x4ull)
-#define NIX_RQINT_R5 (0x5ull)
-#define NIX_RQINT_R6 (0x6ull)
-#define NIX_RQINT_R7 (0x7ull)
-
-#define NIX_MAXSQESZ_W16 (0x0ull)
-#define NIX_MAXSQESZ_W8 (0x1ull)
-
-#define NIX_LSOALG_NOP (0x0ull)
-#define NIX_LSOALG_ADD_SEGNUM (0x1ull)
-#define NIX_LSOALG_ADD_PAYLEN (0x2ull)
-#define NIX_LSOALG_ADD_OFFSET (0x3ull)
-#define NIX_LSOALG_TCP_FLAGS (0x4ull)
-
-#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull)
-#define NIX_MNQERR_SQ_CTX_POISON (0x1ull)
-#define NIX_MNQERR_SQB_FAULT (0x2ull)
-#define NIX_MNQERR_SQB_POISON (0x3ull)
-#define NIX_MNQERR_TOTAL_ERR (0x4ull)
-#define NIX_MNQERR_LSO_ERR (0x5ull)
-#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
-#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
-#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
-#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
-
-#define NIX_MDTYPE_RSVD (0x0ull)
-#define NIX_MDTYPE_FLUSH (0x1ull)
-#define NIX_MDTYPE_PMD (0x2ull)
-
-#define NIX_NDC_TX_PORT_LMT (0x0ull)
-#define NIX_NDC_TX_PORT_ENQ (0x1ull)
-#define NIX_NDC_TX_PORT_MNQ (0x2ull)
-#define NIX_NDC_TX_PORT_DEQ (0x3ull)
-#define NIX_NDC_TX_PORT_DMA (0x4ull)
-#define NIX_NDC_TX_PORT_XQE (0x5ull)
-
-#define NIX_NDC_RX_PORT_AQ (0x0ull)
-#define NIX_NDC_RX_PORT_CQ (0x1ull)
-#define NIX_NDC_RX_PORT_CINT (0x2ull)
-#define NIX_NDC_RX_PORT_MC (0x3ull)
-#define NIX_NDC_RX_PORT_PKT (0x4ull)
-#define NIX_NDC_RX_PORT_RQ (0x5ull)
-
-#define NIX_RE_OPCODE_RE_NONE (0x0ull)
-#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
-#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
-#define NIX_RE_OPCODE_RE_FCS (0x7ull)
-#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
-#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
-#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
-#define NIX_RE_OPCODE_RE_SKIP (0xcull)
-#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
-#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
-#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
-#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
-
-#define NIX_REDALG_STD (0x0ull)
-#define NIX_REDALG_SEND (0x1ull)
-#define NIX_REDALG_STALL (0x2ull)
-#define NIX_REDALG_DISCARD (0x3ull)
-
-#define NIX_RX_MCOP_RQ (0x0ull)
-#define NIX_RX_MCOP_RSS (0x1ull)
-
-#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
-#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
-#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
-#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
-#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
-#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
-#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
-#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
-#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
-#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
-#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
-#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
-#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
-#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
-#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
-#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
-
-#define NIX_SENDCRCALG_CRC32 (0x0ull)
-#define NIX_SENDCRCALG_CRC32C (0x1ull)
-#define NIX_SENDCRCALG_ONES16 (0x2ull)
-
-#define NIX_SENDL3TYPE_NONE (0x0ull)
-#define NIX_SENDL3TYPE_IP4 (0x2ull)
-#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
-#define NIX_SENDL3TYPE_IP6 (0x4ull)
-
-#define NIX_SENDL4TYPE_NONE (0x0ull)
-#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
-#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
-#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
-
-#define NIX_SENDLDTYPE_LDD (0x0ull)
-#define NIX_SENDLDTYPE_LDT (0x1ull)
-#define NIX_SENDLDTYPE_LDWB (0x2ull)
-
-#define NIX_SENDMEMALG_SET (0x0ull)
-#define NIX_SENDMEMALG_SETTSTMP (0x1ull)
-#define NIX_SENDMEMALG_SETRSLT (0x2ull)
-#define NIX_SENDMEMALG_ADD (0x8ull)
-#define NIX_SENDMEMALG_SUB (0x9ull)
-#define NIX_SENDMEMALG_ADDLEN (0xaull)
-#define NIX_SENDMEMALG_SUBLEN (0xbull)
-#define NIX_SENDMEMALG_ADDMBUF (0xcull)
-#define NIX_SENDMEMALG_SUBMBUF (0xdull)
-
-#define NIX_SENDMEMDSZ_B64 (0x0ull)
-#define NIX_SENDMEMDSZ_B32 (0x1ull)
-#define NIX_SENDMEMDSZ_B16 (0x2ull)
-#define NIX_SENDMEMDSZ_B8 (0x3ull)
-
-#define NIX_SEND_STATUS_GOOD (0x0ull)
-#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull)
-#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull)
-#define NIX_SEND_STATUS_SQB_FAULT (0x3ull)
-#define NIX_SEND_STATUS_SQB_POISON (0x4ull)
-#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull)
-#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull)
-#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull)
-#define NIX_SEND_STATUS_JUMP_POISON (0x8ull)
-#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull)
-#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull)
-#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull)
-#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull)
-#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull)
-#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull)
-#define NIX_SEND_STATUS_DATA_FAULT (0x16ull)
-#define NIX_SEND_STATUS_DATA_POISON (0x17ull)
-#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull)
-#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull)
-#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull)
-#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull)
-#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull)
-#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull)
-#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull)
-#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull)
-
-#define NIX_SQINT_LMT_ERR (0x0ull)
-#define NIX_SQINT_MNQ_ERR (0x1ull)
-#define NIX_SQINT_SEND_ERR (0x2ull)
-#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull)
-
-#define NIX_XQE_TYPE_INVALID (0x0ull)
-#define NIX_XQE_TYPE_RX (0x1ull)
-#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
-#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
-#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
-#define NIX_XQE_TYPE_SEND (0x8ull)
-
-#define NIX_AQ_COMP_NOTDONE (0x0ull)
-#define NIX_AQ_COMP_GOOD (0x1ull)
-#define NIX_AQ_COMP_SWERR (0x2ull)
-#define NIX_AQ_COMP_CTX_POISON (0x3ull)
-#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
-#define NIX_AQ_COMP_LOCKERR (0x5ull)
-#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
-
-#define NIX_AF_INT_VEC_RVU (0x0ull)
-#define NIX_AF_INT_VEC_GEN (0x1ull)
-#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
-#define NIX_AF_INT_VEC_POISON (0x4ull)
-
-#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
-#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
-#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
-#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
-
-#define NIX_AQ_INSTOP_NOP (0x0ull)
-#define NIX_AQ_INSTOP_INIT (0x1ull)
-#define NIX_AQ_INSTOP_WRITE (0x2ull)
-#define NIX_AQ_INSTOP_READ (0x3ull)
-#define NIX_AQ_INSTOP_LOCK (0x4ull)
-#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NIX_AQ_CTYPE_RQ (0x0ull)
-#define NIX_AQ_CTYPE_SQ (0x1ull)
-#define NIX_AQ_CTYPE_CQ (0x2ull)
-#define NIX_AQ_CTYPE_MCE (0x3ull)
-#define NIX_AQ_CTYPE_RSS (0x4ull)
-#define NIX_AQ_CTYPE_DYNO (0x5ull)
-
-#define NIX_COLORRESULT_GREEN (0x0ull)
-#define NIX_COLORRESULT_YELLOW (0x1ull)
-#define NIX_COLORRESULT_RED_SEND (0x2ull)
-#define NIX_COLORRESULT_RED_DROP (0x3ull)
-
-#define NIX_CHAN_LBKX_CHX(a, b) \
- (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
-#define NIX_CHAN_R4 (0x400ull)
-#define NIX_CHAN_R5 (0x500ull)
-#define NIX_CHAN_R6 (0x600ull)
-#define NIX_CHAN_SDP_CH_END (0x7ffull)
-#define NIX_CHAN_SDP_CH_START (0x700ull)
-#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
- (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \
- (uint64_t)(c))
-
-#define NIX_INTF_SDP (0x4ull)
-#define NIX_INTF_CGX0 (0x0ull)
-#define NIX_INTF_CGX1 (0x1ull)
-#define NIX_INTF_CGX2 (0x2ull)
-#define NIX_INTF_LBK0 (0x3ull)
-
-#define NIX_CQERRINT_DOOR_ERR (0x0ull)
-#define NIX_CQERRINT_WR_FULL (0x1ull)
-#define NIX_CQERRINT_CQE_FAULT (0x2ull)
-
-#define NIX_LF_INT_VEC_GINT (0x80ull)
-#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
-#define NIX_LF_INT_VEC_POISON (0x82ull)
-#define NIX_LF_INT_VEC_QINT_END (0x3full)
-#define NIX_LF_INT_VEC_QINT_START (0x0ull)
-#define NIX_LF_INT_VEC_CINT_END (0x7full)
-#define NIX_LF_INT_VEC_CINT_START (0x40ull)
-
-/* Enums definitions */
-
-/* Structures definitions */
-
-/* NIX admin queue instruction structure */
-struct nix_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 7;
- uint64_t rsvd_23_15 : 9;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NIX admin queue result structure */
-struct nix_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NIX completion interrupt context hardware structure */
-struct nix_cint_hw_s {
- uint64_t ecount : 32;
- uint64_t qcount : 16;
- uint64_t intr : 1;
- uint64_t ena : 1;
- uint64_t timer_idx : 8;
- uint64_t rsvd_63_58 : 6;
- uint64_t ecount_wait : 32;
- uint64_t qcount_wait : 16;
- uint64_t time_wait : 8;
- uint64_t rsvd_127_120 : 8;
-};
-
-/* NIX completion queue entry header structure */
-struct nix_cqe_hdr_s {
- uint64_t tag : 32;
- uint64_t q : 20;
- uint64_t rsvd_57_52 : 6;
- uint64_t node : 2;
- uint64_t cqe_type : 4;
-};
-
-/* NIX completion queue context structure */
-struct nix_cq_ctx_s {
- uint64_t base : 64;/* W0 */
- uint64_t rsvd_67_64 : 4;
- uint64_t bp_ena : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t bpid : 9;
- uint64_t rsvd_83_81 : 3;
- uint64_t qint_idx : 7;
- uint64_t cq_err : 1;
- uint64_t cint_idx : 7;
- uint64_t avg_con : 9;
- uint64_t wrptr : 20;
- uint64_t tail : 20;
- uint64_t head : 20;
- uint64_t avg_level : 8;
- uint64_t update_time : 16;
- uint64_t bp : 8;
- uint64_t drop : 8;
- uint64_t drop_ena : 1;
- uint64_t ena : 1;
- uint64_t rsvd_211_210 : 2;
- uint64_t substream : 20;
- uint64_t caching : 1;
- uint64_t rsvd_235_233 : 3;
- uint64_t qsize : 4;
- uint64_t cq_err_int : 8;
- uint64_t cq_err_int_ena : 8;
-};
-
-/* NIX instruction header structure */
-struct nix_inst_hdr_s {
- uint64_t pf_func : 16;
- uint64_t sq : 20;
- uint64_t rsvd_63_36 : 28;
-};
-
-/* NIX i/o virtual address structure */
-struct nix_iova_s {
- uint64_t addr : 64; /* W0 */
-};
-
-/* NIX IPsec dynamic ordering counter structure */
-struct nix_ipsec_dyno_s {
- uint32_t count : 32; /* W0 */
-};
-
-/* NIX memory value structure */
-struct nix_mem_result_s {
- uint64_t v : 1;
- uint64_t color : 2;
- uint64_t rsvd_63_3 : 61;
-};
-
-/* NIX statistics operation write data structure */
-struct nix_op_q_wdata_s {
- uint64_t rsvd_31_0 : 32;
- uint64_t q : 20;
- uint64_t rsvd_63_52 : 12;
-};
-
-/* NIX queue interrupt context hardware structure */
-struct nix_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_127_124 : 4;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t rsvd_150_146 : 5;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_319_288 : 32;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_702_688 : 15;
- uint64_t ena_copy : 1;
- uint64_t rsvd_739_704 : 36;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_767_763 : 5;
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t rsvd_127_122 : 6;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_150_148 : 3;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_291_288 : 4;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_319_315 : 5;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_703_688 : 16;
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive side scaling entry structure */
-struct nix_rsse_s {
- uint32_t rq : 20;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NIX receive action structure */
-struct nix_rx_action_s {
- uint64_t op : 4;
- uint64_t pf_func : 16;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_63_61 : 3;
-};
-
-/* NIX receive immediate sub descriptor structure */
-struct nix_rx_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX receive multicast/mirror entry structure */
-struct nix_rx_mce_s {
- uint64_t op : 2;
- uint64_t rsvd_2 : 1;
- uint64_t eol : 1;
- uint64_t index : 20;
- uint64_t rsvd_31_24 : 8;
- uint64_t pf_func : 16;
- uint64_t next : 16;
-};
-
-/* NIX receive parse structure */
-struct nix_rx_parse_s {
- uint64_t chan : 12;
- uint64_t desc_sizem1 : 5;
- uint64_t imm_copy : 1;
- uint64_t express : 1;
- uint64_t wqwd : 1;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t latype : 4;
- uint64_t lbtype : 4;
- uint64_t lctype : 4;
- uint64_t ldtype : 4;
- uint64_t letype : 4;
- uint64_t lftype : 4;
- uint64_t lgtype : 4;
- uint64_t lhtype : 4;
- uint64_t pkt_lenm1 : 16;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t vtag0_valid : 1;
- uint64_t vtag0_gone : 1;
- uint64_t vtag1_valid : 1;
- uint64_t vtag1_gone : 1;
- uint64_t pkind : 6;
- uint64_t rsvd_95_94 : 2;
- uint64_t vtag0_tci : 16;
- uint64_t vtag1_tci : 16;
- uint64_t laflags : 8;
- uint64_t lbflags : 8;
- uint64_t lcflags : 8;
- uint64_t ldflags : 8;
- uint64_t leflags : 8;
- uint64_t lfflags : 8;
- uint64_t lgflags : 8;
- uint64_t lhflags : 8;
- uint64_t eoh_ptr : 8;
- uint64_t wqe_aura : 20;
- uint64_t pb_aura : 20;
- uint64_t match_id : 16;
- uint64_t laptr : 8;
- uint64_t lbptr : 8;
- uint64_t lcptr : 8;
- uint64_t ldptr : 8;
- uint64_t leptr : 8;
- uint64_t lfptr : 8;
- uint64_t lgptr : 8;
- uint64_t lhptr : 8;
- uint64_t vtag0_ptr : 8;
- uint64_t vtag1_ptr : 8;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_383_341 : 43;
- uint64_t rsvd_447_384 : 64; /* W6 */
-};
-
-/* NIX receive scatter/gather sub descriptor structure */
-struct nix_rx_sg_s {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_59_50 : 10;
- uint64_t subdc : 4;
-};
-
-/* NIX receive vtag action structure */
-struct nix_rx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_type : 3;
- uint64_t vtag0_valid : 1;
- uint64_t rsvd_31_16 : 16;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_type : 3;
- uint64_t vtag1_valid : 1;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX send completion structure */
-struct nix_send_comp_s {
- uint64_t status : 8;
- uint64_t sqe_id : 16;
- uint64_t rsvd_63_24 : 40;
-};
-
-/* NIX send CRC sub descriptor structure */
-struct nix_send_crc_s {
- uint64_t size : 16;
- uint64_t start : 16;
- uint64_t insert : 16;
- uint64_t rsvd_57_48 : 10;
- uint64_t alg : 2;
- uint64_t subdc : 4;
- uint64_t iv : 32;
- uint64_t rsvd_127_96 : 32;
-};
-
-/* NIX send extended header sub descriptor structure */
-RTE_STD_C11
-union nix_send_ext_w0_u {
- uint64_t u;
- struct {
- uint64_t lso_mps : 14;
- uint64_t lso : 1;
- uint64_t tstmp : 1;
- uint64_t lso_sb : 8;
- uint64_t lso_format : 5;
- uint64_t rsvd_31_29 : 3;
- uint64_t shp_chg : 9;
- uint64_t shp_dis : 1;
- uint64_t shp_ra : 2;
- uint64_t markptr : 8;
- uint64_t markform : 7;
- uint64_t mark_en : 1;
- uint64_t subdc : 4;
- };
-};
-
-RTE_STD_C11
-union nix_send_ext_w1_u {
- uint64_t u;
- struct {
- uint64_t vlan0_ins_ptr : 8;
- uint64_t vlan0_ins_tci : 16;
- uint64_t vlan1_ins_ptr : 8;
- uint64_t vlan1_ins_tci : 16;
- uint64_t vlan0_ins_ena : 1;
- uint64_t vlan1_ins_ena : 1;
- uint64_t rsvd_127_114 : 14;
- };
-};
-
-struct nix_send_ext_s {
- union nix_send_ext_w0_u w0;
- union nix_send_ext_w1_u w1;
-};
-
-/* NIX send header sub descriptor structure */
-RTE_STD_C11
-union nix_send_hdr_w0_u {
- uint64_t u;
- struct {
- uint64_t total : 18;
- uint64_t rsvd_18 : 1;
- uint64_t df : 1;
- uint64_t aura : 20;
- uint64_t sizem1 : 3;
- uint64_t pnc : 1;
- uint64_t sq : 20;
- };
-};
-
-RTE_STD_C11
-union nix_send_hdr_w1_u {
- uint64_t u;
- struct {
- uint64_t ol3ptr : 8;
- uint64_t ol4ptr : 8;
- uint64_t il3ptr : 8;
- uint64_t il4ptr : 8;
- uint64_t ol3type : 4;
- uint64_t ol4type : 4;
- uint64_t il3type : 4;
- uint64_t il4type : 4;
- uint64_t sqe_id : 16;
- };
-};
-
-struct nix_send_hdr_s {
- union nix_send_hdr_w0_u w0;
- union nix_send_hdr_w1_u w1;
-};
-
-/* NIX send immediate sub descriptor structure */
-struct nix_send_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX send jump sub descriptor structure */
-struct nix_send_jump_s {
- uint64_t sizem1 : 7;
- uint64_t rsvd_13_7 : 7;
- uint64_t ld_type : 2;
- uint64_t aura : 20;
- uint64_t rsvd_58_36 : 23;
- uint64_t f : 1;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send memory sub descriptor structure */
-struct nix_send_mem_s {
- uint64_t offset : 16;
- uint64_t rsvd_52_16 : 37;
- uint64_t wmem : 1;
- uint64_t dsz : 2;
- uint64_t alg : 4;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send scatter/gather sub descriptor structure */
-RTE_STD_C11
-union nix_send_sg_s {
- uint64_t u;
- struct {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_54_50 : 5;
- uint64_t i1 : 1;
- uint64_t i2 : 1;
- uint64_t i3 : 1;
- uint64_t ld_type : 2;
- uint64_t subdc : 4;
- };
-};
-
-/* NIX send work sub descriptor structure */
-struct nix_send_work_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_59_44 : 16;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX sq context hardware structure */
-struct nix_sq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t substream : 20;
- uint64_t max_sqe_size : 2;
- uint64_t sqe_way_mask : 16;
- uint64_t sqb_aura : 20;
- uint64_t gbl_rsvd1 : 5;
- uint64_t cq_id : 20;
- uint64_t cq_ena : 1;
- uint64_t qint_idx : 6;
- uint64_t gbl_rsvd2 : 1;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t xoff : 1;
- uint64_t sqe_stype : 2;
- uint64_t gbl_rsvd : 17;
- uint64_t head_sqb : 64;/* W2 */
- uint64_t head_offset : 6;
- uint64_t sqb_dequeue_count : 16;
- uint64_t default_chan : 12;
- uint64_t sdp_mcast : 1;
- uint64_t sso_ena : 1;
- uint64_t dse_rsvd1 : 28;
- uint64_t sqb_enqueue_count : 16;
- uint64_t tail_offset : 6;
- uint64_t lmt_dis : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t dnq_rsvd1 : 17;
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t next_sqb : 64;/* W6 */
- uint64_t mnq_dis : 1;
- uint64_t smq : 9;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_next_sq_vld : 1;
- uint64_t scm1_rsvd2 : 32;
- uint64_t smenq_sqb : 64;/* W8 */
- uint64_t smenq_offset : 6;
- uint64_t cq_limit : 8;
- uint64_t smq_rr_count : 25;
- uint64_t scm_lso_rem : 18;
- uint64_t scm_dq_rsvd0 : 7;
- uint64_t smq_lso_segnum : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t scm_dq_rsvd1 : 9;
- uint64_t smenq_next_sqb : 64;/* W11 */
- uint64_t seb_rsvd1 : 64;/* W12 */
- uint64_t drop_pkts : 48;
- uint64_t drop_octs_lsw : 16;
- uint64_t drop_octs_msw : 32;
- uint64_t pkts_lsw : 32;
- uint64_t pkts_msw : 16;
- uint64_t octs : 48;
-};
-
-/* NIX send queue context structure */
-struct nix_sq_ctx_s {
- uint64_t ena : 1;
- uint64_t qint_idx : 6;
- uint64_t substream : 20;
- uint64_t sdp_mcast : 1;
- uint64_t cq : 20;
- uint64_t sqe_way_mask : 16;
- uint64_t smq : 9;
- uint64_t cq_ena : 1;
- uint64_t xoff : 1;
- uint64_t sso_ena : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t default_chan : 12;
- uint64_t sqb_count : 16;
- uint64_t smq_rr_count : 25;
- uint64_t sqb_aura : 20;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t sqe_stype : 2;
- uint64_t rsvd_191 : 1;
- uint64_t max_sqe_size : 2;
- uint64_t cq_limit : 8;
- uint64_t lmt_dis : 1;
- uint64_t mnq_dis : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_lso_segnum : 8;
- uint64_t tail_offset : 6;
- uint64_t smenq_offset : 6;
- uint64_t head_offset : 6;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq_vld : 1;
- uint64_t rsvd_255_253 : 3;
- uint64_t next_sqb : 64;/* W4 */
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t smenq_sqb : 64;/* W6 */
- uint64_t smenq_next_sqb : 64;/* W7 */
- uint64_t head_sqb : 64;/* W8 */
- uint64_t rsvd_583_576 : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t rsvd_639_630 : 10;
- uint64_t scm_lso_rem : 18;
- uint64_t rsvd_703_658 : 46;
- uint64_t octs : 48;
- uint64_t rsvd_767_752 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_831_816 : 16;
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t drop_octs : 48;
- uint64_t rsvd_959_944 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_1023_1008 : 16;
-};
-
-/* NIX transmit action structure */
-struct nix_tx_action_s {
- uint64_t op : 4;
- uint64_t rsvd_11_4 : 8;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX transmit vtag action structure */
-struct nix_tx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_op : 2;
- uint64_t rsvd_15_14 : 2;
- uint64_t vtag0_def : 10;
- uint64_t rsvd_31_26 : 6;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_op : 2;
- uint64_t rsvd_47_46 : 2;
- uint64_t vtag1_def : 10;
- uint64_t rsvd_63_58 : 6;
-};
-
-/* NIX work queue entry header structure */
-struct nix_wqe_hdr_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t node : 2;
- uint64_t q : 14;
- uint64_t wqe_type : 4;
-};
-
-/* NIX Rx flow key algorithm field structure */
-struct nix_rx_flowkey_alg {
- uint64_t key_offset :6;
- uint64_t ln_mask :1;
- uint64_t fn_mask :1;
- uint64_t hdr_offset :8;
- uint64_t bytesm1 :5;
- uint64_t lid :3;
- uint64_t reserved_24_24 :1;
- uint64_t ena :1;
- uint64_t sel_chan :1;
- uint64_t ltype_mask :4;
- uint64_t ltype_match :4;
- uint64_t reserved_35_63 :29;
-};
-
-/* NIX LSO format field structure */
-struct nix_lso_format {
- uint64_t offset : 8;
- uint64_t layer : 2;
- uint64_t rsvd_10_11 : 2;
- uint64_t sizem1 : 2;
- uint64_t rsvd_14_15 : 2;
- uint64_t alg : 3;
- uint64_t rsvd_19_63 : 45;
-};
-
-#define NIX_LSO_FIELD_MAX (8)
-#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16)
-#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12)
-#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8)
-#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0)
-
-#define NIX_LSO_FIELD_MASK \
- (NIX_LSO_FIELD_OFF_MASK | \
- NIX_LSO_FIELD_LY_MASK | \
- NIX_LSO_FIELD_SZ_MASK | \
- NIX_LSO_FIELD_ALG_MASK)
-
-#endif /* __OTX2_NIX_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h
deleted file mode 100644
index 2224216c96..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npa.h
+++ /dev/null
@@ -1,305 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPA_HW_H__
-#define __OTX2_NPA_HW_H__
-
-/* Register offsets */
-
-#define NPA_AF_BLK_RST (0x0ull)
-#define NPA_AF_CONST (0x10ull)
-#define NPA_AF_CONST1 (0x18ull)
-#define NPA_AF_LF_RST (0x20ull)
-#define NPA_AF_GEN_CFG (0x30ull)
-#define NPA_AF_NDC_CFG (0x40ull)
-#define NPA_AF_NDC_SYNC (0x50ull)
-#define NPA_AF_INP_CTL (0xd0ull)
-#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull)
-#define NPA_AF_AVG_DELAY (0x100ull)
-#define NPA_AF_GEN_INT (0x140ull)
-#define NPA_AF_GEN_INT_W1S (0x148ull)
-#define NPA_AF_GEN_INT_ENA_W1S (0x150ull)
-#define NPA_AF_GEN_INT_ENA_W1C (0x158ull)
-#define NPA_AF_RVU_INT (0x160ull)
-#define NPA_AF_RVU_INT_W1S (0x168ull)
-#define NPA_AF_RVU_INT_ENA_W1S (0x170ull)
-#define NPA_AF_RVU_INT_ENA_W1C (0x178ull)
-#define NPA_AF_ERR_INT (0x180ull)
-#define NPA_AF_ERR_INT_W1S (0x188ull)
-#define NPA_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NPA_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NPA_AF_RAS (0x1a0ull)
-#define NPA_AF_RAS_W1S (0x1a8ull)
-#define NPA_AF_RAS_ENA_W1S (0x1b0ull)
-#define NPA_AF_RAS_ENA_W1C (0x1b8ull)
-#define NPA_AF_AQ_CFG (0x600ull)
-#define NPA_AF_AQ_BASE (0x610ull)
-#define NPA_AF_AQ_STATUS (0x620ull)
-#define NPA_AF_AQ_DOOR (0x630ull)
-#define NPA_AF_AQ_DONE_WAIT (0x640ull)
-#define NPA_AF_AQ_DONE (0x650ull)
-#define NPA_AF_AQ_DONE_ACK (0x660ull)
-#define NPA_AF_AQ_DONE_TIMER (0x670ull)
-#define NPA_AF_AQ_DONE_INT (0x680ull)
-#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull)
-#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull)
-#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18)
-#define NPA_PRIV_AF_INT_CFG (0x10000ull)
-#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8)
-#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8)
-#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull)
-#define NPA_AF_DTX_FILTER_CTL (0x10040ull)
-
-#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3)
-#define NPA_LF_AURA_OP_FREE0 (0x20ull)
-#define NPA_LF_AURA_OP_FREE1 (0x28ull)
-#define NPA_LF_AURA_OP_CNT (0x30ull)
-#define NPA_LF_AURA_OP_LIMIT (0x50ull)
-#define NPA_LF_AURA_OP_INT (0x60ull)
-#define NPA_LF_AURA_OP_THRESH (0x70ull)
-#define NPA_LF_POOL_OP_PC (0x100ull)
-#define NPA_LF_POOL_OP_AVAILABLE (0x110ull)
-#define NPA_LF_POOL_OP_PTR_START0 (0x120ull)
-#define NPA_LF_POOL_OP_PTR_START1 (0x128ull)
-#define NPA_LF_POOL_OP_PTR_END0 (0x130ull)
-#define NPA_LF_POOL_OP_PTR_END1 (0x138ull)
-#define NPA_LF_POOL_OP_INT (0x160ull)
-#define NPA_LF_POOL_OP_THRESH (0x170ull)
-#define NPA_LF_ERR_INT (0x200ull)
-#define NPA_LF_ERR_INT_W1S (0x208ull)
-#define NPA_LF_ERR_INT_ENA_W1C (0x210ull)
-#define NPA_LF_ERR_INT_ENA_W1S (0x218ull)
-#define NPA_LF_RAS (0x220ull)
-#define NPA_LF_RAS_W1S (0x228ull)
-#define NPA_LF_RAS_ENA_W1C (0x230ull)
-#define NPA_LF_RAS_ENA_W1S (0x238ull)
-#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NPA_AQ_COMP_NOTDONE (0x0ull)
-#define NPA_AQ_COMP_GOOD (0x1ull)
-#define NPA_AQ_COMP_SWERR (0x2ull)
-#define NPA_AQ_COMP_CTX_POISON (0x3ull)
-#define NPA_AQ_COMP_CTX_FAULT (0x4ull)
-#define NPA_AQ_COMP_LOCKERR (0x5ull)
-
-#define NPA_AF_INT_VEC_RVU (0x0ull)
-#define NPA_AF_INT_VEC_GEN (0x1ull)
-#define NPA_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NPA_AF_INT_VEC_AF_ERR (0x3ull)
-#define NPA_AF_INT_VEC_POISON (0x4ull)
-
-#define NPA_AQ_INSTOP_NOP (0x0ull)
-#define NPA_AQ_INSTOP_INIT (0x1ull)
-#define NPA_AQ_INSTOP_WRITE (0x2ull)
-#define NPA_AQ_INSTOP_READ (0x3ull)
-#define NPA_AQ_INSTOP_LOCK (0x4ull)
-#define NPA_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NPA_AQ_CTYPE_AURA (0x0ull)
-#define NPA_AQ_CTYPE_POOL (0x1ull)
-
-#define NPA_BPINTF_NIX0_RX (0x0ull)
-#define NPA_BPINTF_NIX1_RX (0x1ull)
-
-#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull)
-#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull)
-#define NPA_AURA_ERR_INT_R4 (0x4ull)
-#define NPA_AURA_ERR_INT_R5 (0x5ull)
-#define NPA_AURA_ERR_INT_R6 (0x6ull)
-#define NPA_AURA_ERR_INT_R7 (0x7ull)
-
-#define NPA_LF_INT_VEC_ERR_INT (0x40ull)
-#define NPA_LF_INT_VEC_POISON (0x41ull)
-#define NPA_LF_INT_VEC_QINT_END (0x3full)
-#define NPA_LF_INT_VEC_QINT_START (0x0ull)
-
-#define NPA_INPQ_SSO (0x4ull)
-#define NPA_INPQ_TIM (0x5ull)
-#define NPA_INPQ_DPI (0x6ull)
-#define NPA_INPQ_AURA_OP (0xeull)
-#define NPA_INPQ_INTERNAL_RSV (0xfull)
-#define NPA_INPQ_NIX0_RX (0x0ull)
-#define NPA_INPQ_NIX1_RX (0x2ull)
-#define NPA_INPQ_NIX0_TX (0x1ull)
-#define NPA_INPQ_NIX1_TX (0x3ull)
-#define NPA_INPQ_R_END (0xdull)
-#define NPA_INPQ_R_START (0x7ull)
-
-#define NPA_POOL_ERR_INT_OVFLS (0x0ull)
-#define NPA_POOL_ERR_INT_RANGE (0x1ull)
-#define NPA_POOL_ERR_INT_PERR (0x2ull)
-#define NPA_POOL_ERR_INT_R3 (0x3ull)
-#define NPA_POOL_ERR_INT_R4 (0x4ull)
-#define NPA_POOL_ERR_INT_R5 (0x5ull)
-#define NPA_POOL_ERR_INT_R6 (0x6ull)
-#define NPA_POOL_ERR_INT_R7 (0x7ull)
-
-#define NPA_NDC0_PORT_AURA0 (0x0ull)
-#define NPA_NDC0_PORT_AURA1 (0x1ull)
-#define NPA_NDC0_PORT_POOL0 (0x2ull)
-#define NPA_NDC0_PORT_POOL1 (0x3ull)
-#define NPA_NDC0_PORT_STACK0 (0x4ull)
-#define NPA_NDC0_PORT_STACK1 (0x5ull)
-
-#define NPA_LF_ERR_INT_AURA_DIS (0x0ull)
-#define NPA_LF_ERR_INT_AURA_OOR (0x1ull)
-#define NPA_LF_ERR_INT_AURA_FAULT (0xcull)
-#define NPA_LF_ERR_INT_POOL_FAULT (0xdull)
-#define NPA_LF_ERR_INT_STACK_FAULT (0xeull)
-#define NPA_LF_ERR_INT_QINT_FAULT (0xfull)
-
-/* Structures definitions */
-
-/* NPA admin queue instruction structure */
-struct npa_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 9;
- uint64_t rsvd_23_17 : 7;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NPA admin queue result structure */
-struct npa_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NPA aura operation write data structure */
-struct npa_aura_op_wdata_s {
- uint64_t aura : 20;
- uint64_t rsvd_62_20 : 43;
- uint64_t drop : 1;
-};
-
-/* NPA aura context structure */
-struct npa_aura_s {
- uint64_t pool_addr : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t rsvd_66_65 : 2;
- uint64_t pool_caching : 1;
- uint64_t pool_way_mask : 16;
- uint64_t avg_con : 9;
- uint64_t rsvd_93 : 1;
- uint64_t pool_drop_ena : 1;
- uint64_t aura_drop_ena : 1;
- uint64_t bp_ena : 2;
- uint64_t rsvd_103_98 : 6;
- uint64_t aura_drop : 8;
- uint64_t shift : 6;
- uint64_t rsvd_119_118 : 2;
- uint64_t avg_level : 8;
- uint64_t count : 36;
- uint64_t rsvd_167_164 : 4;
- uint64_t nix0_bpid : 9;
- uint64_t rsvd_179_177 : 3;
- uint64_t nix1_bpid : 9;
- uint64_t rsvd_191_189 : 3;
- uint64_t limit : 36;
- uint64_t rsvd_231_228 : 4;
- uint64_t bp : 8;
- uint64_t rsvd_243_240 : 4;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t rsvd_255_252 : 4;
- uint64_t fc_addr : 64;/* W4 */
- uint64_t pool_drop : 8;
- uint64_t update_time : 16;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_363 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_371 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_383_379 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_447_420 : 28;
- uint64_t rsvd_511_448 : 64;/* W7 */
-};
-
-/* NPA pool context structure */
-struct npa_pool_s {
- uint64_t stack_base : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t nat_align : 1;
- uint64_t rsvd_67_66 : 2;
- uint64_t stack_caching : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t stack_way_mask : 16;
- uint64_t buf_offset : 12;
- uint64_t rsvd_103_100 : 4;
- uint64_t buf_size : 11;
- uint64_t rsvd_127_115 : 13;
- uint64_t stack_max_pages : 32;
- uint64_t stack_pages : 32;
- uint64_t op_pc : 48;
- uint64_t rsvd_255_240 : 16;
- uint64_t stack_offset : 4;
- uint64_t rsvd_263_260 : 4;
- uint64_t shift : 6;
- uint64_t rsvd_271_270 : 2;
- uint64_t avg_level : 8;
- uint64_t avg_con : 9;
- uint64_t fc_ena : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t fc_up_crossing : 1;
- uint64_t rsvd_299_297 : 3;
- uint64_t update_time : 16;
- uint64_t rsvd_319_316 : 4;
- uint64_t fc_addr : 64;/* W5 */
- uint64_t ptr_start : 64;/* W6 */
- uint64_t ptr_end : 64;/* W7 */
- uint64_t rsvd_535_512 : 24;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_555 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_563 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_575_571 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_639_612 : 28;
- uint64_t rsvd_703_640 : 64;/* W10 */
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NPA queue interrupt context hardware structure */
-struct npa_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-#endif /* __OTX2_NPA_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h
deleted file mode 100644
index b4e3c1eedc..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npc.h
+++ /dev/null
@@ -1,503 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPC_HW_H__
-#define __OTX2_NPC_HW_H__
-
-/* Register offsets */
-
-#define NPC_AF_CFG (0x0ull)
-#define NPC_AF_ACTIVE_PC (0x10ull)
-#define NPC_AF_CONST (0x20ull)
-#define NPC_AF_CONST1 (0x30ull)
-#define NPC_AF_BLK_RST (0x40ull)
-#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull)
-#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull)
-#define NPC_AF_KPUX_CFG(a) \
- (0x500ull | (uint64_t)(a) << 3)
-#define NPC_AF_PCK_CFG (0x600ull)
-#define NPC_AF_PCK_DEF_OL2 (0x610ull)
-#define NPC_AF_PCK_DEF_OIP4 (0x620ull)
-#define NPC_AF_PCK_DEF_OIP6 (0x630ull)
-#define NPC_AF_PCK_DEF_IIP4 (0x640ull)
-#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \
- (0x800ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_KEX_CFG(a) \
- (0x1010ull | (uint64_t)(a) << 8)
-#define NPC_AF_PKINDX_ACTION0(a) \
- (0x80000ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_ACTION1(a) \
- (0x80008ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_CPI_DEFX(a, b) \
- (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CHLEN90B_PKIND (0x3bull)
-#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \
- (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \
- (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \
- (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRY_DISX(a, b) \
- (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CPIX_CFG(a) \
- (0x200000ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \
- (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 5 | (uint64_t)(d) << 3)
-#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \
- (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \
- (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
- (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \
- (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \
- (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \
- (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MATCH_STATX(a) \
- (0x1880008ull | (uint64_t)(a) << 8)
-#define NPC_AF_INTFX_MISS_STAT_ACT(a) \
- (0x1880040ull + (uint64_t)(a) * 0x8)
-#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \
- (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \
- (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_INTFX_MISS_ACT(a) \
- (0x1a00000ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_MISS_TAG_ACT(a) \
- (0x1b00008ull | (uint64_t)(a) << 4)
-#define NPC_AF_MCAM_BANKX_HITX(a, b) \
- (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_LKUP_CTL (0x2000000ull)
-#define NPC_AF_LKUP_DATAX(a) \
- (0x2000200ull | (uint64_t)(a) << 4)
-#define NPC_AF_LKUP_RESULTX(a) \
- (0x2000400ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_STAT(a) \
- (0x2000800ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_CTL (0x3000000ull)
-#define NPC_AF_DBG_STATUS (0x3000010ull)
-#define NPC_AF_KPUX_DBG(a) \
- (0x3000020ull | (uint64_t)(a) << 8)
-#define NPC_AF_IKPU_ERR_CTL (0x3000080ull)
-#define NPC_AF_KPUX_ERR_CTL(a) \
- (0x30000a0ull | (uint64_t)(a) << 8)
-#define NPC_AF_MCAM_DBG (0x3001000ull)
-#define NPC_AF_DBG_DATAX(a) \
- (0x3001400ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_RESULTX(a) \
- (0x3001800ull | (uint64_t)(a) << 4)
-
-
-/* Enum offsets */
-
-#define NPC_INTF_NIX0_RX (0x0ull)
-#define NPC_INTF_NIX0_TX (0x1ull)
-
-#define NPC_LKUPOP_PKT (0x0ull)
-#define NPC_LKUPOP_KEY (0x1ull)
-
-#define NPC_MCAM_KEY_X1 (0x0ull)
-#define NPC_MCAM_KEY_X2 (0x1ull)
-#define NPC_MCAM_KEY_X4 (0x2ull)
-
-enum NPC_ERRLEV_E {
- NPC_ERRLEV_RE = 0,
- NPC_ERRLEV_LA = 1,
- NPC_ERRLEV_LB = 2,
- NPC_ERRLEV_LC = 3,
- NPC_ERRLEV_LD = 4,
- NPC_ERRLEV_LE = 5,
- NPC_ERRLEV_LF = 6,
- NPC_ERRLEV_LG = 7,
- NPC_ERRLEV_LH = 8,
- NPC_ERRLEV_R9 = 9,
- NPC_ERRLEV_R10 = 10,
- NPC_ERRLEV_R11 = 11,
- NPC_ERRLEV_R12 = 12,
- NPC_ERRLEV_R13 = 13,
- NPC_ERRLEV_R14 = 14,
- NPC_ERRLEV_NIX = 15,
- NPC_ERRLEV_ENUM_LAST = 16,
-};
-
-enum npc_kpu_err_code {
- NPC_EC_NOERR = 0, /* has to be zero */
- NPC_EC_UNK,
- NPC_EC_IH_LENGTH,
- NPC_EC_EDSA_UNK,
- NPC_EC_L2_K1,
- NPC_EC_L2_K2,
- NPC_EC_L2_K3,
- NPC_EC_L2_K3_ETYPE_UNK,
- NPC_EC_L2_K4,
- NPC_EC_MPLS_2MANY,
- NPC_EC_MPLS_UNK,
- NPC_EC_NSH_UNK,
- NPC_EC_IP_TTL_0,
- NPC_EC_IP_FRAG_OFFSET_1,
- NPC_EC_IP_VER,
- NPC_EC_IP6_HOP_0,
- NPC_EC_IP6_VER,
- NPC_EC_TCP_FLAGS_FIN_ONLY,
- NPC_EC_TCP_FLAGS_ZERO,
- NPC_EC_TCP_FLAGS_RST_FIN,
- NPC_EC_TCP_FLAGS_URG_SYN,
- NPC_EC_TCP_FLAGS_RST_SYN,
- NPC_EC_TCP_FLAGS_SYN_FIN,
- NPC_EC_VXLAN,
- NPC_EC_NVGRE,
- NPC_EC_GRE,
- NPC_EC_GRE_VER1,
- NPC_EC_L4,
- NPC_EC_OIP4_CSUM,
- NPC_EC_IIP4_CSUM,
- NPC_EC_LAST /* has to be the last item */
-};
-
-enum NPC_LID_E {
- NPC_LID_LA = 0,
- NPC_LID_LB,
- NPC_LID_LC,
- NPC_LID_LD,
- NPC_LID_LE,
- NPC_LID_LF,
- NPC_LID_LG,
- NPC_LID_LH,
-};
-
-#define NPC_LT_NA 0
-
-enum npc_kpu_la_ltype {
- NPC_LT_LA_8023 = 1,
- NPC_LT_LA_ETHER,
- NPC_LT_LA_IH_NIX_ETHER,
- NPC_LT_LA_IH_8_ETHER,
- NPC_LT_LA_IH_4_ETHER,
- NPC_LT_LA_IH_2_ETHER,
- NPC_LT_LA_HIGIG2_ETHER,
- NPC_LT_LA_IH_NIX_HIGIG2_ETHER,
- NPC_LT_LA_CUSTOM_L2_90B_ETHER,
- NPC_LT_LA_CPT_HDR,
- NPC_LT_LA_CUSTOM_L2_24B_ETHER,
- NPC_LT_LA_CUSTOM0 = 0xE,
- NPC_LT_LA_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lb_ltype {
- NPC_LT_LB_ETAG = 1,
- NPC_LT_LB_CTAG,
- NPC_LT_LB_STAG_QINQ,
- NPC_LT_LB_BTAG,
- NPC_LT_LB_ITAG,
- NPC_LT_LB_DSA,
- NPC_LT_LB_DSA_VLAN,
- NPC_LT_LB_EDSA,
- NPC_LT_LB_EDSA_VLAN,
- NPC_LT_LB_EXDSA,
- NPC_LT_LB_EXDSA_VLAN,
- NPC_LT_LB_FDSA,
- NPC_LT_LB_VLAN_EXDSA,
- NPC_LT_LB_CUSTOM0 = 0xE,
- NPC_LT_LB_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lc_ltype {
- NPC_LT_LC_PTP = 1,
- NPC_LT_LC_IP,
- NPC_LT_LC_IP_OPT,
- NPC_LT_LC_IP6,
- NPC_LT_LC_IP6_EXT,
- NPC_LT_LC_ARP,
- NPC_LT_LC_RARP,
- NPC_LT_LC_MPLS,
- NPC_LT_LC_NSH,
- NPC_LT_LC_FCOE,
- NPC_LT_LC_NGIO,
- NPC_LT_LC_CUSTOM0 = 0xE,
- NPC_LT_LC_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * effect flow tag calculation and thus RSS.
- */
-enum npc_kpu_ld_ltype {
- NPC_LT_LD_TCP = 1,
- NPC_LT_LD_UDP,
- NPC_LT_LD_ICMP,
- NPC_LT_LD_SCTP,
- NPC_LT_LD_ICMP6,
- NPC_LT_LD_CUSTOM0,
- NPC_LT_LD_CUSTOM1,
- NPC_LT_LD_IGMP = 8,
- NPC_LT_LD_AH,
- NPC_LT_LD_GRE,
- NPC_LT_LD_NVGRE,
- NPC_LT_LD_NSH,
- NPC_LT_LD_TU_MPLS_IN_NSH,
- NPC_LT_LD_TU_MPLS_IN_IP,
-};
-
-enum npc_kpu_le_ltype {
- NPC_LT_LE_VXLAN = 1,
- NPC_LT_LE_GENEVE,
- NPC_LT_LE_ESP,
- NPC_LT_LE_GTPU = 4,
- NPC_LT_LE_VXLANGPE,
- NPC_LT_LE_GTPC,
- NPC_LT_LE_NSH,
- NPC_LT_LE_TU_MPLS_IN_GRE,
- NPC_LT_LE_TU_NSH_IN_GRE,
- NPC_LT_LE_TU_MPLS_IN_UDP,
- NPC_LT_LE_CUSTOM0 = 0xE,
- NPC_LT_LE_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lf_ltype {
- NPC_LT_LF_TU_ETHER = 1,
- NPC_LT_LF_TU_PPP,
- NPC_LT_LF_TU_MPLS_IN_VXLANGPE,
- NPC_LT_LF_TU_NSH_IN_VXLANGPE,
- NPC_LT_LF_TU_MPLS_IN_NSH,
- NPC_LT_LF_TU_3RD_NSH,
- NPC_LT_LF_CUSTOM0 = 0xE,
- NPC_LT_LF_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lg_ltype {
- NPC_LT_LG_TU_IP = 1,
- NPC_LT_LG_TU_IP6,
- NPC_LT_LG_TU_ARP,
- NPC_LT_LG_TU_ETHER_IN_NSH,
- NPC_LT_LG_CUSTOM0 = 0xE,
- NPC_LT_LG_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * effect flow tag calculation and thus RSS.
- */
-enum npc_kpu_lh_ltype {
- NPC_LT_LH_TU_TCP = 1,
- NPC_LT_LH_TU_UDP,
- NPC_LT_LH_TU_ICMP,
- NPC_LT_LH_TU_SCTP,
- NPC_LT_LH_TU_ICMP6,
- NPC_LT_LH_TU_IGMP = 8,
- NPC_LT_LH_TU_ESP,
- NPC_LT_LH_TU_AH,
- NPC_LT_LH_CUSTOM0 = 0xE,
- NPC_LT_LH_CUSTOM1 = 0xF,
-};
-
-/* Structures definitions */
-struct npc_kpu_profile_cam {
- uint8_t state;
- uint8_t state_mask;
- uint16_t dp0;
- uint16_t dp0_mask;
- uint16_t dp1;
- uint16_t dp1_mask;
- uint16_t dp2;
- uint16_t dp2_mask;
-};
-
-struct npc_kpu_profile_action {
- uint8_t errlev;
- uint8_t errcode;
- uint8_t dp0_offset;
- uint8_t dp1_offset;
- uint8_t dp2_offset;
- uint8_t bypass_count;
- uint8_t parse_done;
- uint8_t next_state;
- uint8_t ptr_advance;
- uint8_t cap_ena;
- uint8_t lid;
- uint8_t ltype;
- uint8_t flags;
- uint8_t offset;
- uint8_t mask;
- uint8_t right;
- uint8_t shift;
-};
-
-struct npc_kpu_profile {
- int cam_entries;
- int action_entries;
- struct npc_kpu_profile_cam *cam;
- struct npc_kpu_profile_action *action;
-};
-
-/* NPC KPU register formats */
-struct npc_kpu_cam {
- uint64_t dp0_data : 16;
- uint64_t dp1_data : 16;
- uint64_t dp2_data : 16;
- uint64_t state : 8;
- uint64_t rsvd_63_56 : 8;
-};
-
-struct npc_kpu_action0 {
- uint64_t var_len_shift : 3;
- uint64_t var_len_right : 1;
- uint64_t var_len_mask : 8;
- uint64_t var_len_offset : 8;
- uint64_t ptr_advance : 8;
- uint64_t capture_flags : 8;
- uint64_t capture_ltype : 4;
- uint64_t capture_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t next_state : 8;
- uint64_t parse_done : 1;
- uint64_t capture_ena : 1;
- uint64_t byp_count : 3;
- uint64_t rsvd_63_57 : 7;
-};
-
-struct npc_kpu_action1 {
- uint64_t dp0_offset : 8;
- uint64_t dp1_offset : 8;
- uint64_t dp2_offset : 8;
- uint64_t errcode : 8;
- uint64_t errlev : 4;
- uint64_t rsvd_63_36 : 28;
-};
-
-struct npc_kpu_pkind_cpi_def {
- uint64_t cpi_base : 10;
- uint64_t rsvd_11_10 : 2;
- uint64_t add_shift : 3;
- uint64_t rsvd_15 : 1;
- uint64_t add_mask : 8;
- uint64_t add_offset : 8;
- uint64_t flags_mask : 8;
- uint64_t flags_match : 8;
- uint64_t ltype_mask : 4;
- uint64_t ltype_match : 4;
- uint64_t lid : 3;
- uint64_t rsvd_62_59 : 4;
- uint64_t ena : 1;
-};
-
-struct nix_rx_action {
- uint64_t op :4;
- uint64_t pf_func :16;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t flow_key_alg :5;
- uint64_t rsvd_63_61 :3;
-};
-
-struct nix_tx_action {
- uint64_t op :4;
- uint64_t rsvd_11_4 :8;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t rsvd_63_48 :16;
-};
-
-/* NPC layer parse information structure */
-struct npc_layer_info_s {
- uint32_t lptr : 8;
- uint32_t flags : 8;
- uint32_t ltype : 4;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NPC layer mcam search key extract structure */
-struct npc_layer_kex_s {
- uint16_t flags : 8;
- uint16_t ltype : 4;
- uint16_t rsvd_15_12 : 4;
-};
-
-/* NPC mcam search key x1 structure */
-struct npc_mcam_key_x1_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 48;
- uint64_t rsvd_191_176 : 16;
-};
-
-/* NPC mcam search key x2 structure */
-struct npc_mcam_key_x2_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 32;
- uint64_t rsvd_319_288 : 32;
-};
-
-/* NPC mcam search key x4 structure */
-struct npc_mcam_key_x4_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 64; /* W4 */
- uint64_t kw4 : 64; /* W5 */
- uint64_t kw5 : 64; /* W6 */
- uint64_t kw6 : 64; /* W7 */
-};
-
-/* NPC parse key extract structure */
-struct npc_parse_kex_s {
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t la : 12;
- uint64_t lb : 12;
- uint64_t lc : 12;
- uint64_t ld : 12;
- uint64_t le : 12;
- uint64_t lf : 12;
- uint64_t lg : 12;
- uint64_t lh : 12;
- uint64_t rsvd_127_124 : 4;
-};
-
-/* NPC result structure */
-struct npc_result_s {
- uint64_t intf : 2;
- uint64_t pkind : 6;
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t eoh_ptr : 8;
- uint64_t rsvd_63_44 : 20;
- uint64_t action : 64; /* W1 */
- uint64_t vtag_action : 64; /* W2 */
- uint64_t la : 20;
- uint64_t lb : 20;
- uint64_t lc : 20;
- uint64_t rsvd_255_252 : 4;
- uint64_t ld : 20;
- uint64_t le : 20;
- uint64_t lf : 20;
- uint64_t rsvd_319_316 : 4;
- uint64_t lg : 20;
- uint64_t lh : 20;
- uint64_t rsvd_383_360 : 24;
-};
-
-#endif /* __OTX2_NPC_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_ree.h b/drivers/common/octeontx2/hw/otx2_ree.h
deleted file mode 100644
index b7481f125f..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ree.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_REE_HW_H__
-#define __OTX2_REE_HW_H__
-
-/* REE BAR0*/
-#define REE_AF_REEXM_MAX_MATCH (0x80c8)
-
-/* REE BAR02 */
-#define REE_LF_MISC_INT (0x300)
-#define REE_LF_DONE_INT (0x120)
-
-#define REE_AF_QUEX_GMCTL(a) (0x800 | (a) << 3)
-
-#define REE_AF_INT_VEC_RAS (0x0ull)
-#define REE_AF_INT_VEC_RVU (0x1ull)
-#define REE_AF_INT_VEC_QUE_DONE (0x2ull)
-#define REE_AF_INT_VEC_AQ (0x3ull)
-
-/* ENUMS */
-
-#define REE_LF_INT_VEC_QUE_DONE (0x0ull)
-#define REE_LF_INT_VEC_MISC (0x1ull)
-
-#endif /* __OTX2_REE_HW_H__*/
diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h
deleted file mode 100644
index b98dbcb1cd..0000000000
--- a/drivers/common/octeontx2/hw/otx2_rvu.h
+++ /dev/null
@@ -1,219 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RVU_HW_H__
-#define __OTX2_RVU_HW_H__
-
-/* Register offsets */
-
-#define RVU_AF_MSIXTR_BASE (0x10ull)
-#define RVU_AF_BLK_RST (0x30ull)
-#define RVU_AF_PF_BAR4_ADDR (0x40ull)
-#define RVU_AF_RAS (0x100ull)
-#define RVU_AF_RAS_W1S (0x108ull)
-#define RVU_AF_RAS_ENA_W1S (0x110ull)
-#define RVU_AF_RAS_ENA_W1C (0x118ull)
-#define RVU_AF_GEN_INT (0x120ull)
-#define RVU_AF_GEN_INT_W1S (0x128ull)
-#define RVU_AF_GEN_INT_ENA_W1S (0x130ull)
-#define RVU_AF_GEN_INT_ENA_W1C (0x138ull)
-#define RVU_AF_AFPFX_MBOXX(a, b) \
- (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3)
-#define RVU_AF_PFME_STATUS (0x2800ull)
-#define RVU_AF_PFTRPEND (0x2810ull)
-#define RVU_AF_PFTRPEND_W1S (0x2820ull)
-#define RVU_AF_PF_RST (0x2840ull)
-#define RVU_AF_HWVF_RST (0x2850ull)
-#define RVU_AF_PFAF_MBOX_INT (0x2880ull)
-#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull)
-#define RVU_AF_PFFLR_INT (0x28a0ull)
-#define RVU_AF_PFFLR_INT_W1S (0x28a8ull)
-#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull)
-#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull)
-#define RVU_AF_PFME_INT (0x28c0ull)
-#define RVU_AF_PFME_INT_W1S (0x28c8ull)
-#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull)
-#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull)
-#define RVU_PRIV_CONST (0x8000000ull)
-#define RVU_PRIV_GEN_CFG (0x8000010ull)
-#define RVU_PRIV_CLK_CFG (0x8000020ull)
-#define RVU_PRIV_ACTIVE_PC (0x8000030ull)
-#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_NIXX_CFG(a, b) \
- (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_CPTX_CFG(a, b) \
- (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3)
-#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \
- (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \
- (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-
-#define RVU_PF_VFX_PFVF_MBOXX(a, b) \
- (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3)
-#define RVU_PF_VF_BAR4_ADDR (0x10ull)
-#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3)
-#define RVU_PF_INT (0xc20ull)
-#define RVU_PF_INT_W1S (0xc28ull)
-#define RVU_PF_INT_ENA_W1S (0xc30ull)
-#define RVU_PF_INT_ENA_W1C (0xc38ull)
-#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3)
-#define RVU_VF_INT (0x20ull)
-#define RVU_VF_INT_W1S (0x28ull)
-#define RVU_VF_INT_ENA_W1S (0x30ull)
-#define RVU_VF_INT_ENA_W1C (0x38ull)
-#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-
-
-/* Enum offsets */
-
-#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull)
-#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull)
-#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \
- (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25))
-
-#define RVU_AF_INT_VEC_POISON (0x0ull)
-#define RVU_AF_INT_VEC_PFFLR (0x1ull)
-#define RVU_AF_INT_VEC_PFME (0x2ull)
-#define RVU_AF_INT_VEC_GEN (0x3ull)
-#define RVU_AF_INT_VEC_MBOX (0x4ull)
-
-#define RVU_BLOCK_TYPE_RVUM (0x0ull)
-#define RVU_BLOCK_TYPE_LMT (0x2ull)
-#define RVU_BLOCK_TYPE_NIX (0x3ull)
-#define RVU_BLOCK_TYPE_NPA (0x4ull)
-#define RVU_BLOCK_TYPE_NPC (0x5ull)
-#define RVU_BLOCK_TYPE_SSO (0x6ull)
-#define RVU_BLOCK_TYPE_SSOW (0x7ull)
-#define RVU_BLOCK_TYPE_TIM (0x8ull)
-#define RVU_BLOCK_TYPE_CPT (0x9ull)
-#define RVU_BLOCK_TYPE_NDC (0xaull)
-#define RVU_BLOCK_TYPE_DDF (0xbull)
-#define RVU_BLOCK_TYPE_ZIP (0xcull)
-#define RVU_BLOCK_TYPE_RAD (0xdull)
-#define RVU_BLOCK_TYPE_DFA (0xeull)
-#define RVU_BLOCK_TYPE_HNA (0xfull)
-#define RVU_BLOCK_TYPE_REE (0xeull)
-
-#define RVU_BLOCK_ADDR_RVUM (0x0ull)
-#define RVU_BLOCK_ADDR_LMT (0x1ull)
-#define RVU_BLOCK_ADDR_NPA (0x3ull)
-#define RVU_BLOCK_ADDR_NIX0 (0x4ull)
-#define RVU_BLOCK_ADDR_NIX1 (0x5ull)
-#define RVU_BLOCK_ADDR_NPC (0x6ull)
-#define RVU_BLOCK_ADDR_SSO (0x7ull)
-#define RVU_BLOCK_ADDR_SSOW (0x8ull)
-#define RVU_BLOCK_ADDR_TIM (0x9ull)
-#define RVU_BLOCK_ADDR_CPT0 (0xaull)
-#define RVU_BLOCK_ADDR_CPT1 (0xbull)
-#define RVU_BLOCK_ADDR_NDC0 (0xcull)
-#define RVU_BLOCK_ADDR_NDC1 (0xdull)
-#define RVU_BLOCK_ADDR_NDC2 (0xeull)
-#define RVU_BLOCK_ADDR_R_END (0x1full)
-#define RVU_BLOCK_ADDR_R_START (0x14ull)
-#define RVU_BLOCK_ADDR_REE0 (0x14ull)
-#define RVU_BLOCK_ADDR_REE1 (0x15ull)
-
-#define RVU_VF_INT_VEC_MBOX (0x0ull)
-
-#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull)
-#define RVU_PF_INT_VEC_VFFLR0 (0x0ull)
-#define RVU_PF_INT_VEC_VFFLR1 (0x1ull)
-#define RVU_PF_INT_VEC_VFME0 (0x2ull)
-#define RVU_PF_INT_VEC_VFME1 (0x3ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull)
-
-
-#define AF_BAR2_ALIASX_SIZE (0x100000ull)
-
-#define TIM_AF_BAR2_SEL (0x9000000ull)
-#define SSO_AF_BAR2_SEL (0x9000000ull)
-#define NIX_AF_BAR2_SEL (0x9000000ull)
-#define SSOW_AF_BAR2_SEL (0x9000000ull)
-#define NPA_AF_BAR2_SEL (0x9000000ull)
-#define CPT_AF_BAR2_SEL (0x9000000ull)
-#define RVU_AF_BAR2_SEL (0x9000000ull)
-#define REE_AF_BAR2_SEL (0x9000000ull)
-
-#define AF_BAR2_ALIASX(a, b) \
- (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b))
-#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define REE_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-
-/* Structures definitions */
-
-/* RVU admin function register address structure */
-struct rvu_af_addr_s {
- uint64_t addr : 28;
- uint64_t block : 5;
- uint64_t rsvd_63_33 : 31;
-};
-
-/* RVU function-unique address structure */
-struct rvu_func_addr_s {
- uint32_t addr : 12;
- uint32_t lf_slot : 8;
- uint32_t block : 5;
- uint32_t rsvd_31_25 : 7;
-};
-
-/* RVU msi-x vector structure */
-struct rvu_msix_vec_s {
- uint64_t addr : 64; /* W0 */
- uint64_t data : 32;
- uint64_t mask : 1;
- uint64_t pend : 1;
- uint64_t rsvd_127_98 : 30;
-};
-
-/* RVU pf function identification structure */
-struct rvu_pf_func_s {
- uint16_t func : 10;
- uint16_t pf : 6;
-};
-
-#endif /* __OTX2_RVU_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_sdp.h b/drivers/common/octeontx2/hw/otx2_sdp.h
deleted file mode 100644
index 1e690f8b32..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sdp.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SDP_HW_H_
-#define __OTX2_SDP_HW_H_
-
-/* SDP VF IOQs */
-#define SDP_MIN_RINGS_PER_VF (1)
-#define SDP_MAX_RINGS_PER_VF (8)
-
-/* SDP VF IQ configuration */
-#define SDP_VF_MAX_IQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_IQ_DESCRIPTORS (128)
-
-#define SDP_VF_DB_MIN (1)
-#define SDP_VF_DB_TIMEOUT (1)
-#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF)
-
-#define SDP_VF_64BYTE_INSTR (64)
-#define SDP_VF_32BYTE_INSTR (32)
-
-/* SDP VF OQ configuration */
-#define SDP_VF_MAX_OQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_OQ_DESCRIPTORS (128)
-#define SDP_VF_OQ_BUF_SIZE (2048)
-#define SDP_VF_OQ_REFIL_THRESHOLD (16)
-
-#define SDP_VF_OQ_INFOPTR_MODE (1)
-#define SDP_VF_OQ_BUFPTR_MODE (0)
-
-#define SDP_VF_OQ_INTR_PKT (1)
-#define SDP_VF_OQ_INTR_TIME (10)
-#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF
-
-/* Wait time in milliseconds for FLR */
-#define SDP_VF_PCI_FLR_WAIT (100)
-#define SDP_VF_BUSY_LOOP_COUNT (10000)
-
-#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF
-#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF
-
-/* SDP VF IOQs per rawdev */
-#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES
-#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES
-
-/* SDP VF Register definitions */
-#define SDP_VF_RING_OFFSET (0x1ull << 17)
-
-/* SDP VF IQ Registers */
-#define SDP_VF_R_IN_CONTROL_START (0x10000)
-#define SDP_VF_R_IN_ENABLE_START (0x10010)
-#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020)
-#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030)
-#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
-#define SDP_VF_R_IN_CNTS_START (0x10050)
-#define SDP_VF_R_IN_INT_LEVELS_START (0x10060)
-#define SDP_VF_R_IN_PKT_CNT_START (0x10080)
-#define SDP_VF_R_IN_BYTE_CNT_START (0x10090)
-
-#define SDP_VF_R_IN_CONTROL(ring) \
- (SDP_VF_R_IN_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_ENABLE(ring) \
- (SDP_VF_R_IN_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_BADDR(ring) \
- (SDP_VF_R_IN_INSTR_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_RSIZE(ring) \
- (SDP_VF_R_IN_INSTR_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_DBELL(ring) \
- (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_CNTS(ring) \
- (SDP_VF_R_IN_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INT_LEVELS(ring) \
- (SDP_VF_R_IN_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_PKT_CNT(ring) \
- (SDP_VF_R_IN_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_BYTE_CNT(ring) \
- (SDP_VF_R_IN_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF IQ Masks */
-#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF)
-#define SDP_VF_R_IN_CTL_RPVF_POS (48)
-
-#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28)
-#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */
-#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24)
-#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8)
-#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6)
-#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5)
-#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3)
-#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1)
-#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0)
-
-#define SDP_VF_R_IN_CTL_MASK \
- (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B)
-
-/* SDP VF OQ Registers */
-#define SDP_VF_R_OUT_CNTS_START (0x10100)
-#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110)
-#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120)
-#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130)
-#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140)
-#define SDP_VF_R_OUT_CONTROL_START (0x10150)
-#define SDP_VF_R_OUT_ENABLE_START (0x10160)
-#define SDP_VF_R_OUT_PKT_CNT_START (0x10180)
-#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190)
-
-#define SDP_VF_R_OUT_CONTROL(ring) \
- (SDP_VF_R_OUT_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_ENABLE(ring) \
- (SDP_VF_R_OUT_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_BADDR(ring) \
- (SDP_VF_R_OUT_SLIST_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \
- (SDP_VF_R_OUT_SLIST_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_DBELL(ring) \
- (SDP_VF_R_OUT_SLIST_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_CNTS(ring) \
- (SDP_VF_R_OUT_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_INT_LEVELS(ring) \
- (SDP_VF_R_OUT_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_PKT_CNT(ring) \
- (SDP_VF_R_OUT_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_BYTE_CNT(ring) \
- (SDP_VF_R_OUT_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF OQ Masks */
-#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40)
-#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34)
-#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33)
-#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32)
-#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30)
-#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29)
-#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28)
-#define SDP_VF_R_OUT_CTL_ES_P (1ull << 26)
-#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25)
-#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24)
-#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23)
-
-#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63)
-#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32)
-
-/* SDP Instruction Header */
-struct sdp_instr_ih {
- /* Data Len */
- uint64_t tlen:16;
-
- /* Reserved1 */
- uint64_t rsvd1:20;
-
- /* PKIND for SDP */
- uint64_t pkind:6;
-
- /* Front Data size */
- uint64_t fsz:6;
-
- /* No. of entries in gather list */
- uint64_t gsz:14;
-
- /* Gather indicator */
- uint64_t gather:1;
-
- /* Reserved2 */
- uint64_t rsvd2:1;
-} __rte_packed;
-
-#endif /* __OTX2_SDP_HW_H_ */
-
diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h
deleted file mode 100644
index 98a8130b16..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sso.h
+++ /dev/null
@@ -1,209 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSO_HW_H__
-#define __OTX2_SSO_HW_H__
-
-/* Register offsets */
-
-#define SSO_AF_CONST (0x1000ull)
-#define SSO_AF_CONST1 (0x1008ull)
-#define SSO_AF_WQ_INT_PC (0x1020ull)
-#define SSO_AF_NOS_CNT (0x1050ull)
-#define SSO_AF_AW_WE (0x1080ull)
-#define SSO_AF_WS_CFG (0x1088ull)
-#define SSO_AF_GWE_CFG (0x1098ull)
-#define SSO_AF_GWE_RANDOM (0x10b0ull)
-#define SSO_AF_LF_HWGRP_RST (0x10e0ull)
-#define SSO_AF_AW_CFG (0x10f0ull)
-#define SSO_AF_BLK_RST (0x10f8ull)
-#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull)
-#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull)
-#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull)
-#define SSO_AF_ERR0 (0x1220ull)
-#define SSO_AF_ERR0_W1S (0x1228ull)
-#define SSO_AF_ERR0_ENA_W1C (0x1230ull)
-#define SSO_AF_ERR0_ENA_W1S (0x1238ull)
-#define SSO_AF_ERR2 (0x1260ull)
-#define SSO_AF_ERR2_W1S (0x1268ull)
-#define SSO_AF_ERR2_ENA_W1C (0x1270ull)
-#define SSO_AF_ERR2_ENA_W1S (0x1278ull)
-#define SSO_AF_UNMAP_INFO (0x12f0ull)
-#define SSO_AF_UNMAP_INFO2 (0x1300ull)
-#define SSO_AF_UNMAP_INFO3 (0x1310ull)
-#define SSO_AF_RAS (0x1420ull)
-#define SSO_AF_RAS_W1S (0x1430ull)
-#define SSO_AF_RAS_ENA_W1C (0x1460ull)
-#define SSO_AF_RAS_ENA_W1S (0x1470ull)
-#define SSO_AF_AW_INP_CTL (0x2070ull)
-#define SSO_AF_AW_ADD (0x2080ull)
-#define SSO_AF_AW_READ_ARB (0x2090ull)
-#define SSO_AF_XAQ_REQ_PC (0x20b0ull)
-#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull)
-#define SSO_AF_TAQ_CNT (0x20c0ull)
-#define SSO_AF_TAQ_ADD (0x20e0ull)
-#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3)
-#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_AF_INT_CFG (0x3000ull)
-#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull)
-#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \
- (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \
- (uint64_t)(c) << 3)
-#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_WAEX_TAG(a, b) \
- (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define SSO_AF_TAQX_WAEX_WQP(a, b) \
- (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK
-#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_IAQ_RSVD_THR 0x2
-
-#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK
-#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull
-#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_TAQ_RSVD_THR 0x3
-
-#define SSO_HWGRP_PRI_AFF_MASK 0xFull
-#define SSO_HWGRP_PRI_AFF_SHIFT 8
-#define SSO_HWGRP_PRI_WGT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_SHIFT 16
-#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24
-
-#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0)
-#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1)
-#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2)
-#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3)
-#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4)
-
-#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8)
-#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9)
-#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull
-#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull
-
-/* Enum offsets */
-
-#define SSO_LF_INT_VEC_GRP (0x0ull)
-
-#define SSO_AF_INT_VEC_ERR0 (0x0ull)
-#define SSO_AF_INT_VEC_ERR2 (0x1ull)
-#define SSO_AF_INT_VEC_RAS (0x2ull)
-
-#define SSO_WA_IOBN (0x0ull)
-#define SSO_WA_NIXRX (0x1ull)
-#define SSO_WA_CPT (0x2ull)
-#define SSO_WA_ADDWQ (0x3ull)
-#define SSO_WA_DPI (0x4ull)
-#define SSO_WA_NIXTX (0x5ull)
-#define SSO_WA_TIM (0x6ull)
-#define SSO_WA_ZIP (0x7ull)
-
-#define SSO_TT_ORDERED (0x0ull)
-#define SSO_TT_ATOMIC (0x1ull)
-#define SSO_TT_UNTAGGED (0x2ull)
-#define SSO_TT_EMPTY (0x3ull)
-
-
-/* Structures definitions */
-
-#endif /* __OTX2_SSO_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h
deleted file mode 100644
index 8a44578036..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ssow.h
+++ /dev/null
@@ -1,56 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSOW_HW_H__
-#define __OTX2_SSOW_HW_H__
-
-/* Register offsets */
-
-#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull)
-#define SSOW_AF_LF_HWS_RST (0x30ull)
-#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3)
-#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3)
-#define SSOW_AF_SCRATCH_WS (0x100000ull)
-#define SSOW_AF_SCRATCH_GW (0x200000ull)
-#define SSOW_AF_SCRATCH_AW (0x300000ull)
-
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-
-/* Enum offsets */
-
-#define SSOW_LF_INT_VEC_IOP (0x0ull)
-
-
-#endif /* __OTX2_SSOW_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h
deleted file mode 100644
index 41442ad0a8..0000000000
--- a/drivers/common/octeontx2/hw/otx2_tim.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_HW_H__
-#define __OTX2_TIM_HW_H__
-
-/* TIM */
-#define TIM_AF_CONST (0x90)
-#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3)
-#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3)
-#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_LF_RST (0x20)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3)
-#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3)
-#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3)
-#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3)
-#define TIM_AF_FLAGS_REG (0x80)
-#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0)
-#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47)
-#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)
-#define TIM_AF_RINGX_CLT1_CLK_10NS (0)
-#define TIM_AF_RINGX_CLT1_CLK_GPIO (1)
-#define TIM_AF_RINGX_CLT1_CLK_GTI (2)
-#define TIM_AF_RINGX_CLT1_CLK_PTP (3)
-
-/* ENUMS */
-
-#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull)
-#define TIM_LF_INT_VEC_RAS_INT (0x1ull)
-
-#endif /* __OTX2_TIM_HW_H__ */
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
deleted file mode 100644
index 223ba5ef51..0000000000
--- a/drivers/common/octeontx2/meson.build
+++ /dev/null
@@ -1,24 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources= files(
- 'otx2_common.c',
- 'otx2_dev.c',
- 'otx2_irq.c',
- 'otx2_mbox.c',
- 'otx2_sec_idev.c',
-)
-
-deps = ['eal', 'pci', 'ethdev', 'kvargs']
-includes += include_directories(
- '../../common/octeontx2',
- '../../mempool/octeontx2',
- '../../bus/pci',
-)
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
deleted file mode 100644
index d23c50242e..0000000000
--- a/drivers/common/octeontx2/otx2_common.c
+++ /dev/null
@@ -1,216 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-/**
- * @internal
- * Set default NPA configuration.
- */
-void
-otx2_npa_set_defaults(struct otx2_idev_cfg *idev)
-{
- idev->npa_pf_func = 0;
- rte_atomic16_set(&idev->npa_refcnt, 0);
-}
-
-/**
- * @internal
- * Get intra device config structure.
- */
-struct otx2_idev_cfg *
-otx2_intra_dev_get_cfg(void)
-{
- const char name[] = "octeontx2_intra_device_conf";
- const struct rte_memzone *mz;
- struct otx2_idev_cfg *idev;
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- idev = mz->addr;
- idev->sso_pf_func = 0;
- idev->npa_lf = NULL;
- otx2_npa_set_defaults(idev);
- return idev;
- }
- return NULL;
-}
-
-/**
- * @internal
- * Get SSO PF_FUNC.
- */
-uint16_t
-otx2_sso_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t sso_pf_func;
-
- sso_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- sso_pf_func = idev->sso_pf_func;
-
- return sso_pf_func;
-}
-
-/**
- * @internal
- * Set SSO PF_FUNC.
- */
-void
-otx2_sso_pf_func_set(uint16_t sso_pf_func)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL) {
- idev->sso_pf_func = sso_pf_func;
- rte_smp_wmb();
- }
-}
-
-/**
- * @internal
- * Get NPA PF_FUNC.
- */
-uint16_t
-otx2_npa_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t npa_pf_func;
-
- npa_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- npa_pf_func = idev->npa_pf_func;
-
- return npa_pf_func;
-}
-
-/**
- * @internal
- * Get NPA lf object.
- */
-struct otx2_npa_lf *
-otx2_npa_lf_obj_get(void)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt))
- return idev->npa_lf;
-
- return NULL;
-}
-
-/**
- * @internal
- * Is NPA lf active for the given device?.
- */
-int
-otx2_npa_lf_active(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
-
- /* Check if npalf is actively used on this dev */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox)
- return 0;
-
- return rte_atomic16_read(&idev->npa_refcnt);
-}
-
-/*
- * @internal
- * Gets reference only to existing NPA LF object.
- */
-int otx2_npa_lf_obj_ref(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t cnt;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
-
- /* Check if ref not possible */
- if (idev == NULL)
- return -EINVAL;
-
-
- /* Get ref only if > 0 */
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- while (cnt != 0) {
- rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1);
- if (rc)
- break;
-
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- }
-
- return cnt ? 0 : -EINVAL;
-}
-
-static int
-parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint64_t val;
-
- val = strtoull(value, NULL, 16);
-
- *(uint64_t *)extra_args = val;
-
- return 0;
-}
-
-/*
- * @internal
- * Parse common device arguments
- */
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
-{
-
- struct otx2_idev_cfg *idev;
- uint64_t npa_lock_mask = 0;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
- &parse_npa_lock_mask, &npa_lock_mask);
-
- idev->npa_lock_mask = npa_lock_mask;
-}
-
-RTE_LOG_REGISTER(otx2_logtype_base, pmd.octeontx2.base, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_mbox, pmd.octeontx2.mbox, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npa, pmd.mempool.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_nix, pmd.net.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npc, pmd.net.octeontx2.flow, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tm, pmd.net.octeontx2.tm, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_sso, pmd.event.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tim, pmd.event.octeontx2.timer, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_dpi, pmd.raw.octeontx2.dpi, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ep, pmd.raw.octeontx2.ep, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ree, pmd.regex.octeontx2, NOTICE);
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
deleted file mode 100644
index cd52e098e6..0000000000
--- a/drivers/common/octeontx2/otx2_common.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_COMMON_H_
-#define _OTX2_COMMON_H_
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_kvargs.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_io.h>
-
-#include "hw/otx2_rvu.h"
-#include "hw/otx2_nix.h"
-#include "hw/otx2_npc.h"
-#include "hw/otx2_npa.h"
-#include "hw/otx2_sdp.h"
-#include "hw/otx2_sso.h"
-#include "hw/otx2_ssow.h"
-#include "hw/otx2_tim.h"
-#include "hw/otx2_ree.h"
-
-/* Alignment */
-#define OTX2_ALIGN 128
-
-/* Bits manipulation */
-#ifndef BIT_ULL
-#define BIT_ULL(nr) (1ULL << (nr))
-#endif
-#ifndef BIT
-#define BIT(nr) (1UL << (nr))
-#endif
-
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-#ifndef BITS_PER_LONG_LONG
-#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
-#endif
-
-#ifndef GENMASK
-#define GENMASK(h, l) \
- (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif
-#ifndef GENMASK_ULL
-#define GENMASK_ULL(h, l) \
- (((~0ULL) - (1ULL << (l)) + 1) & \
- (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
-#endif
-
-#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
-
-/* Intra device related functions */
-struct otx2_npa_lf;
-struct otx2_idev_cfg {
- uint16_t sso_pf_func;
- uint16_t npa_pf_func;
- struct otx2_npa_lf *npa_lf;
- RTE_STD_C11
- union {
- rte_atomic16_t npa_refcnt;
- uint16_t npa_refcnt_u16;
- };
- uint64_t npa_lock_mask;
-};
-
-__rte_internal
-struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
-__rte_internal
-void otx2_sso_pf_func_set(uint16_t sso_pf_func);
-__rte_internal
-uint16_t otx2_sso_pf_func_get(void);
-__rte_internal
-uint16_t otx2_npa_pf_func_get(void);
-__rte_internal
-struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
-__rte_internal
-void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
-__rte_internal
-int otx2_npa_lf_active(void *dev);
-__rte_internal
-int otx2_npa_lf_obj_ref(void);
-__rte_internal
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
-
-/* Log */
-extern int otx2_logtype_base;
-extern int otx2_logtype_mbox;
-extern int otx2_logtype_npa;
-extern int otx2_logtype_nix;
-extern int otx2_logtype_sso;
-extern int otx2_logtype_npc;
-extern int otx2_logtype_tm;
-extern int otx2_logtype_tim;
-extern int otx2_logtype_dpi;
-extern int otx2_logtype_ep;
-extern int otx2_logtype_ree;
-
-#define otx2_err(fmt, args...) \
- RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", \
- __func__, __LINE__, ## args)
-
-#define otx2_info(fmt, args...) \
- RTE_LOG(INFO, PMD, fmt"\n", ## args)
-
-#define otx2_dbg(subsystem, fmt, args...) \
- rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \
- "[%s] %s():%u " fmt "\n", \
- #subsystem, __func__, __LINE__, ##args)
-
-#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__)
-#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__)
-#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__)
-#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__)
-#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__)
-#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__)
-#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__)
-#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__)
-#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__)
-#define otx2_sdp_dbg(fmt, ...) otx2_dbg(ep, fmt, ##__VA_ARGS__)
-#define otx2_ree_dbg(fmt, ...) otx2_dbg(ree, fmt, ##__VA_ARGS__)
-
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063
-#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064
-#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE
-#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8
-#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
-/* OCTEON TX2 98xx EP mode */
-#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
-#define PCI_DEVID_OCTEONTX2_EP_RAW_VF 0xB204 /* OCTEON TX2 EP mode */
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7
-#define PCI_DEVID_OCTEONTX2_RVU_REE_PF 0xA0f4
-#define PCI_DEVID_OCTEONTX2_RVU_REE_VF 0xA0f5
-
-/*
- * REVID for RVU PCIe devices.
- * Bits 0..1: minor pass
- * Bits 3..2: major pass
- * Bits 7..4: midr id, 0:96, 1:95, 2:loki, f:unknown
- */
-
-#define RVU_PCI_REV_MIDR_ID(rev_id) (rev_id >> 4)
-#define RVU_PCI_REV_MAJOR(rev_id) ((rev_id >> 2) & 0x3)
-#define RVU_PCI_REV_MINOR(rev_id) (rev_id & 0x3)
-
-#define RVU_PCI_CN96XX_MIDR_ID 0x0
-#define RVU_PCI_CNF95XX_MIDR_ID 0x1
-
-/* PCI Config offsets */
-#define RVU_PCI_REVISION_ID 0x08
-
-/* IO Access */
-#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
-#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-
-#if defined(RTE_ARCH_ARM64)
-#include "otx2_io_arm64.h"
-#else
-#include "otx2_io_generic.h"
-#endif
-
-/* Fastpath lookup */
-#define OTX2_NIX_FASTPATH_LOOKUP_MEM "otx2_nix_fastpath_lookup_mem"
-#define OTX2_NIX_SA_TBL_START (4096*4 + 69632*2)
-
-#endif /* _OTX2_COMMON_H_ */
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
deleted file mode 100644
index 08dca87848..0000000000
--- a/drivers/common/octeontx2/otx2_dev.c
+++ /dev/null
@@ -1,1074 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <fcntl.h>
-#include <inttypes.h>
-#include <sys/mman.h>
-#include <unistd.h>
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_memcpy.h>
-#include <rte_eal_paging.h>
-
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */
-#define RVU_MAX_INT_RETRY 3
-
-/* PF/VF message handling timer */
-#define VF_PF_MBOX_TIMER_MS (20 * 1000)
-
-static void *
-mbox_mem_map(off_t off, size_t size)
-{
- void *va = MAP_FAILED;
- int mem_fd;
-
- if (size <= 0)
- goto error;
-
- mem_fd = open("/dev/mem", O_RDWR);
- if (mem_fd < 0)
- goto error;
-
- va = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
- RTE_MAP_SHARED, mem_fd, off);
- close(mem_fd);
-
- if (va == NULL)
- otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd",
- size, mem_fd, (intmax_t)off);
-error:
- return va;
-}
-
-static void
-mbox_mem_unmap(void *va, size_t size)
-{
- if (va)
- rte_mem_unmap(va, size);
-}
-
-static int
-pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp)
-{
- uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_msghdr *msghdr;
- uint64_t off;
- int rc = 0;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout += sleep;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT);
- rc = -EIO;
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(int_status, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- if (rc == 0) {
- /* Get message */
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off);
- if (rsp)
- *rsp = msghdr;
- rc = msghdr->rc;
- }
-
- return rc;
-}
-
-static int
-af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg)
-{
- uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- struct mbox_msghdr *rsp;
- uint64_t offset;
- size_t size;
- int i;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout++;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Routed messages %d timeout: %dms",
- num_msg, MBOX_RSP_TIMEOUT);
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- rte_spinlock_lock(&mdev->mbox_lock);
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs != num_msg)
- otx2_err("Routed messages: %d received: %d", num_msg,
- req_hdr->num_msgs);
-
- /* Get messages from mbox */
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* Reserve PF/VF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* Copy message from AF<->PF mbox to PF<->VF mbox */
- otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
-
- /* Set status and sender pf_func data */
- rsp->rc = msg->rc;
- rsp->pcifunc = msg->pcifunc;
-
- /* Whenever a PF comes up, AF sends the link status to it but
- * when VF comes up no such event is sent to respective VF.
- * Using MBOX_MSG_NIX_LF_START_RX response from AF for the
- * purpose and send the link status of PF to VF.
- */
- if (msg->id == MBOX_MSG_NIX_LF_START_RX) {
- /* Send link status to VF */
- struct cgx_link_user_info linfo;
- struct mbox_msghdr *vf_msg;
- size_t sz;
-
- /* Get the link status */
- if (dev->ops && dev->ops->link_status_get)
- dev->ops->link_status_get(dev, &linfo);
-
- sz = RTE_ALIGN(otx2_mbox_id2size(
- MBOX_MSG_CGX_LINK_EVENT), MBOX_MSG_ALIGN);
- /* Prepare the message to be sent */
- vf_msg = otx2_mbox_alloc_msg(&dev->mbox_vfpf_up, vf,
- sz);
- otx2_mbox_req_init(MBOX_MSG_CGX_LINK_EVENT, vf_msg);
- memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr),
- &linfo, sizeof(struct cgx_link_user_info));
-
- vf_msg->rc = msg->rc;
- vf_msg->pcifunc = msg->pcifunc;
- /* Send to VF */
- otx2_mbox_msg_send(&dev->mbox_vfpf_up, vf);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return req_hdr->num_msgs;
-}
-
-static int
-vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- int offset, routed = 0; struct otx2_mbox *mbox = &dev->mbox_vfpf;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- size_t size;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (!req_hdr->num_msgs)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
-
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- if (msg->id == MBOX_MSG_READY) {
- struct ready_msg_rsp *rsp;
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
-
- /* Handle READY message in PF */
- dev->active_vfs[vf / max_bits] |=
- BIT_ULL(vf % max_bits);
- rsp = (struct ready_msg_rsp *)
- otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp));
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* PF/VF function ID */
- rsp->hdr.pcifunc = msg->pcifunc;
- rsp->hdr.rc = 0;
- } else {
- struct mbox_msghdr *af_req;
- /* Reserve AF/PF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size);
- otx2_mbox_req_init(msg->id, af_req);
-
- /* Copy message from VF<->PF mbox to PF<->AF mbox */
- otx2_mbox_memcpy((uint8_t *)af_req +
- sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
- af_req->pcifunc = msg->pcifunc;
- routed++;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- if (routed > 0) {
- otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF",
- dev->pf, routed, vf);
- af_pf_wait_msg(dev, vf, routed);
- otx2_mbox_reset(dev->mbox, 0);
- }
-
- /* Send mbox responses to VF */
- if (mdev->num_msgs) {
- otx2_base_dbg("pf:%d reply %d messages to vf:%d",
- dev->pf, mdev->num_msgs, vf);
- otx2_mbox_msg_send(mbox, vf);
- }
-
- return i;
-}
-
-static int
-vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = &dev->mbox_vfpf_up;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- switch (msg->id) {
- case MBOX_MSG_CGX_LINK_EVENT:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- case MBOX_MSG_CGX_PTP_RX_INFO:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- default:
- otx2_err("Not handled UP msg 0x%x (%s) func:0x%x",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- otx2_mbox_reset(mbox, vf);
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-
- return i;
-}
-
-static void
-otx2_vf_pf_mbox_handle_msg(void *param)
-{
- uint16_t vf, max_vf, max_bits;
- struct otx2_dev *dev = param;
-
- max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t);
- max_vf = max_bits * MAX_VFPF_DWORD_BITS;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) {
- otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)",
- vf, dev->pf, dev->vf);
- vf_pf_process_msgs(dev, vf);
- /* UP messages */
- vf_pf_process_up_msgs(dev, vf);
- dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits));
- }
- }
- dev->timer_set = 0;
-}
-
-static void
-otx2_vf_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- bool alarm_set = false;
- uint64_t intr;
- int vfpf;
-
- for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
- intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- if (!intr)
- continue;
-
- otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
- vfpf, intr, dev->pf, dev->vf);
-
- /* Save and clear intr bits */
- dev->intr.bits[vfpf] |= intr;
- otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- alarm_set = true;
- }
-
- if (!dev->timer_set && alarm_set) {
- dev->timer_set = 1;
- /* Start timer to handle messages */
- rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS,
- otx2_vf_pf_mbox_handle_msg, dev);
- }
-}
-
-static void
-otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
-
- switch (msg->id) {
- /* Add message id's that are handled here */
- case MBOX_MSG_READY:
- /* Get our identity */
- dev->pf_func = msg->pcifunc;
- break;
-
- default:
- if (msg->rc)
- otx2_err("Message (%s) response has err=%d",
- otx2_mbox_id2name(msg->id), msg->rc);
- break;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- otx2_mbox_reset(mbox, 0);
- /* Update acked if someone is waiting a message */
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-}
-
-/* Copies the message received from AF and sends it to VF */
-static void
-pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg)
-{
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t);
- struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up;
- struct msg_req *msg = rec_msg;
- struct mbox_msghdr *vf_msg;
- uint16_t vf;
- size_t size;
-
- size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
- /* Send UP message to all VF's */
- for (vf = 0; vf < vf_mbox->ndevs; vf++) {
- /* VF active */
- if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf))))
- continue;
-
- otx2_base_dbg("(%s) size: %zx to VF: %d",
- otx2_mbox_id2name(msg->hdr.id), size, vf);
-
- /* Reserve PF/VF mbox message */
- vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size);
- if (!vf_msg) {
- otx2_err("Failed to alloc VF%d UP message", vf);
- continue;
- }
- otx2_mbox_req_init(msg->hdr.id, vf_msg);
-
- /*
- * Copy message from AF<->PF UP mbox
- * to PF<->VF UP mbox
- */
- otx2_mbox_memcpy((uint8_t *)vf_msg +
- sizeof(struct mbox_msghdr), (uint8_t *)msg
- + sizeof(struct mbox_msghdr), size -
- sizeof(struct mbox_msghdr));
-
- vf_msg->rc = msg->hdr.rc;
- /* Set PF to be a sender */
- vf_msg->pcifunc = dev->pf_func;
-
- /* Send to VF */
- otx2_mbox_msg_send(vf_mbox, vf);
- }
-}
-
-static int
-otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev,
- struct cgx_link_info_msg *msg,
- struct msg_rsp *rsp)
-{
- struct cgx_link_user_info *linfo = &msg->link_info;
-
- otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func),
- linfo->link_up ? "UP" : "DOWN", msg->hdr.id,
- otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets link notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets link up notification */
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev,
- struct cgx_ptp_rx_info_msg *msg,
- struct msg_rsp *rsp)
-{
- otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func),
- otx2_get_vf(dev->pf_func),
- msg->ptp_en ? "ENABLED" : "DISABLED",
- msg->hdr.id, otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets PTP notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets PTP notification */
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req)
-{
- /* Check if valid, if not reply with a invalid msg */
- if (req->sig != OTX2_MBOX_REQ_SIG)
- return -EIO;
-
- switch (req->id) {
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
- case _id: { \
- struct _rsp_type *rsp; \
- int err; \
- \
- rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \
- &dev->mbox_up, 0, \
- sizeof(struct _rsp_type)); \
- if (!rsp) \
- return -ENOMEM; \
- \
- rsp->hdr.id = _id; \
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \
- rsp->hdr.pcifunc = dev->pf_func; \
- rsp->hdr.rc = 0; \
- \
- err = otx2_mbox_up_handler_ ## _fn_name( \
- dev, (struct _req_type *)req, rsp); \
- return err; \
- }
-MBOX_UP_CGX_MESSAGES
-#undef M
-
- default :
- otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
- }
-
- return -ENODEV;
-}
-
-static void
-otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int i, err, offset;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- err = mbox_process_msgs_up(dev, msg);
- if (err)
- otx2_err("Error %d handling 0x%x (%s)",
- err, msg->id, otx2_mbox_id2name(msg->id));
- offset = mbox->rx_start + msg->next_msgoff;
- }
- /* Send mbox responses */
- if (mdev->num_msgs) {
- otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs);
- otx2_mbox_msg_send(mbox, 0);
- }
-}
-
-static void
-otx2_pf_vf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_VF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_VF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static void
-otx2_af_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_PF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_PF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static int
-mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i, rc;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- /* MBOX interrupt for VF(0...63) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- if (rc) {
- otx2_err("Fail to register PF(VF0-63) mbox irq");
- return rc;
- }
- /* MBOX interrupt for VF(64...128) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- if (rc) {
- otx2_err("Fail to register PF(VF64-128) mbox irq");
- return rc;
- }
- /* MBOX interrupt AF <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq,
- dev, RVU_PF_INT_VEC_AFPF_MBOX);
- if (rc) {
- otx2_err("Fail to register AF<->PF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int rc;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* MBOX interrupt PF <-> VF */
- rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq,
- dev, RVU_VF_INT_VEC_MBOX);
- if (rc) {
- otx2_err("Fail to register PF<->VF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return mbox_register_vf_irq(pci_dev, dev);
- else
- return mbox_register_pf_irq(pci_dev, dev);
-}
-
-static void
-mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev);
-
- /* Unregister the interrupt handler for each vectors */
- /* MBOX interrupt for VF(0...63) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- /* MBOX interrupt for VF(64...128) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- /* MBOX interrupt AF <-> PF */
- otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_AFPF_MBOX);
-
-}
-
-static void
-mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* Unregister the interrupt handler */
- otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev,
- RVU_VF_INT_VEC_MBOX);
-}
-
-static void
-mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- mbox_unregister_vf_irq(pci_dev, dev);
- else
- mbox_unregister_pf_irq(pci_dev, dev);
-}
-
-static int
-vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msg_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_vf_flr(mbox);
- /* Overwrite pcifunc to indicate VF */
- req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- /* Sync message in interrupt context */
- rc = pf_af_sync_msg(dev, NULL);
- if (rc)
- otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc);
-
- return rc;
-}
-
-static void
-otx2_pf_vf_flr_irq(void *param)
-{
- struct otx2_dev *dev = (struct otx2_dev *)param;
- uint16_t max_vf = 64, vf;
- uintptr_t bar2;
- uint64_t intr;
- int i;
-
- max_vf = (dev->maxvf > 0) ? dev->maxvf : 64;
- bar2 = dev->bar2;
-
- otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf);
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i));
- if (!intr)
- continue;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (!(intr & (1ULL << vf)))
- continue;
-
- otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d",
- i, intr, (64 * i + vf));
- /* Clear interrupt */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i));
- /* Disable the interrupt */
- otx2_write64(BIT_ULL(vf),
- bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
- /* Inform AF about VF reset */
- vf_flr_send_msg(dev, vf);
-
- /* Signal FLR finish */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i));
- /* Enable interrupt */
- otx2_write64(~0ull,
- bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- }
-}
-
-static int
-vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
-}
-
-static int
-vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int i, rc;
-
- otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc);
-
- /* Enable HW interrupt */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- return 0;
-}
-
-/**
- * @internal
- * Get number of active VFs for the given PF device.
- */
-int
-otx2_dev_active_vfs(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- int i, count = 0;
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- count += __builtin_popcount(dev->active_vfs[i]);
-
- return count;
-}
-
-static void
-otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- switch (pci_dev->id.device_id) {
- case PCI_DEVID_OCTEONTX2_RVU_PF:
- break;
- case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF:
- case PCI_DEVID_OCTEONTX2_RVU_NPA_VF:
- case PCI_DEVID_OCTEONTX2_RVU_CPT_VF:
- case PCI_DEVID_OCTEONTX2_RVU_AF_VF:
- case PCI_DEVID_OCTEONTX2_RVU_VF:
- case PCI_DEVID_OCTEONTX2_RVU_SDP_VF:
- dev->hwcap |= OTX2_HWCAP_F_VF;
- break;
- }
-}
-
-/**
- * @internal
- * Initialize the otx2 device
- */
-int
-otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- int up_direction = MBOX_DIR_PFAF_UP;
- int rc, direction = MBOX_DIR_PFAF;
- uint64_t intr_offset = RVU_PF_INT;
- struct otx2_dev *dev = otx2_dev;
- uintptr_t bar2, bar4;
- uint64_t bar4_addr;
- void *hwbase;
-
- bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
- bar4 = (uintptr_t)pci_dev->mem_resource[4].addr;
-
- if (bar2 == 0 || bar4 == 0) {
- otx2_err("Failed to get pci bars");
- rc = -ENODEV;
- goto error;
- }
-
- dev->node = pci_dev->device.numa_node;
- dev->maxvf = pci_dev->max_vfs;
- dev->bar2 = bar2;
- dev->bar4 = bar4;
-
- otx2_update_vf_hwcap(pci_dev, dev);
-
- if (otx2_dev_is_vf(dev)) {
- direction = MBOX_DIR_VFPF;
- up_direction = MBOX_DIR_VFPF_UP;
- intr_offset = RVU_VF_INT;
- }
-
- /* Initialize the local mbox */
- rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1,
- intr_offset);
- if (rc)
- goto error;
- dev->mbox = &dev->mbox_local;
-
- rc = otx2_mbox_init(&dev->mbox_up, bar4, bar2, up_direction, 1,
- intr_offset);
- if (rc)
- goto error;
-
- /* Register mbox interrupts */
- rc = mbox_register_irq(pci_dev, dev);
- if (rc)
- goto mbox_fini;
-
- /* Check the readiness of PF/VF */
- rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func);
- if (rc)
- goto mbox_unregister;
-
- dev->pf = otx2_get_pf(dev->pf_func);
- dev->vf = otx2_get_vf(dev->pf_func);
- memset(&dev->active_vfs, 0, sizeof(dev->active_vfs));
-
- /* Found VF devices in a PF device */
- if (pci_dev->max_vfs > 0) {
-
- /* Remap mbox area for all vf's */
- bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR);
- if (bar4_addr == 0) {
- rc = -ENODEV;
- goto mbox_fini;
- }
-
- hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs);
- if (hwbase == MAP_FAILED) {
- rc = -ENOMEM;
- goto mbox_fini;
- }
- /* Init mbox object */
- rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto iounmap;
-
- /* PF -> VF UP messages */
- rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto mbox_fini;
- }
-
- /* Register VF-FLR irq handlers */
- if (otx2_dev_is_pf(dev)) {
- rc = vf_flr_register_irqs(pci_dev, dev);
- if (rc)
- goto iounmap;
- }
- dev->mbox_active = 1;
- return rc;
-
-iounmap:
- mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs);
-mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
-mbox_fini:
- otx2_mbox_fini(dev->mbox);
- otx2_mbox_fini(&dev->mbox_up);
-error:
- return rc;
-}
-
-/**
- * @internal
- * Finalize the otx2 device
- */
-void
-otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_mbox *mbox;
-
- /* Clear references to this pci dev */
- idev = otx2_intra_dev_get_cfg();
- if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev)
- idev->npa_lf = NULL;
-
- mbox_unregister_irq(pci_dev, dev);
-
- if (otx2_dev_is_pf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
- /* Release PF - VF */
- mbox = &dev->mbox_vfpf;
- if (mbox->hwbase && mbox->dev)
- mbox_mem_unmap((void *)mbox->hwbase,
- MBOX_SIZE * pci_dev->max_vfs);
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_vfpf_up;
- otx2_mbox_fini(mbox);
-
- /* Release PF - AF */
- mbox = dev->mbox;
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_up;
- otx2_mbox_fini(mbox);
- dev->mbox_active = 0;
-
- /* Disable MSIX vectors */
- otx2_disable_irqs(intr_handle);
-}
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
deleted file mode 100644
index d5b2b0d9af..0000000000
--- a/drivers/common/octeontx2/otx2_dev.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_DEV_H
-#define _OTX2_DEV_H
-
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mbox.h"
-#include "otx2_mempool.h"
-
-/* Common HWCAP flags. Use from LSB bits */
-#define OTX2_HWCAP_F_VF BIT_ULL(8) /* VF device */
-#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF)
-#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF))
-#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \
- (dev->tx_chan_base < 0x700))
-#define otx2_dev_revid(dev) (dev->hwcap & 0xFF)
-#define otx2_dev_is_sdp(dev) (dev->sdp_link)
-
-#define otx2_dev_is_vf_or_sdp(dev) \
- (otx2_dev_is_vf(dev) || otx2_dev_is_sdp(dev))
-
-#define otx2_dev_is_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_95xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-#define otx2_dev_is_95xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-
-#define otx2_dev_is_96xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_96xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_Cx(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_C0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_98xx(dev) \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x3)
-
-struct otx2_dev;
-
-/* Link status update callback */
-typedef void (*otx2_link_status_update_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-/* PTP info callback */
-typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en);
-/* Link status get callback */
-typedef void (*otx2_link_status_get_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-
-struct otx2_dev_ops {
- otx2_link_status_update_t link_status_update;
- otx2_ptp_info_t ptp_info_update;
- otx2_link_status_get_t link_status_get;
-};
-
-#define OTX2_DEV \
- int node __rte_cache_aligned; \
- uint16_t pf; \
- int16_t vf; \
- uint16_t pf_func; \
- uint8_t mbox_active; \
- bool drv_inited; \
- uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \
- uintptr_t bar2; \
- uintptr_t bar4; \
- struct otx2_mbox mbox_local; \
- struct otx2_mbox mbox_up; \
- struct otx2_mbox mbox_vfpf; \
- struct otx2_mbox mbox_vfpf_up; \
- otx2_intr_t intr; \
- int timer_set; /* ~0 : no alarm handling */ \
- uint64_t hwcap; \
- struct otx2_npa_lf npalf; \
- struct otx2_mbox *mbox; \
- uint16_t maxvf; \
- const struct otx2_dev_ops *ops
-
-struct otx2_dev {
- OTX2_DEV;
-};
-
-__rte_internal
-int otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-
-/* Common dev init and fini routines */
-
-static __rte_always_inline int
-otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- uint8_t rev_id;
- int rc;
-
- rc = rte_pci_read_config(pci_dev, &rev_id,
- 1, RVU_PCI_REVISION_ID);
- if (rc != 1) {
- otx2_err("Failed to read pci revision id, rc=%d", rc);
- return rc;
- }
-
- dev->hwcap = rev_id;
- return otx2_dev_priv_init(pci_dev, otx2_dev);
-}
-
-__rte_internal
-void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_dev_active_vfs(void *otx2_dev);
-
-#define RVU_PFVF_PF_SHIFT 10
-#define RVU_PFVF_PF_MASK 0x3F
-#define RVU_PFVF_FUNC_SHIFT 0
-#define RVU_PFVF_FUNC_MASK 0x3FF
-
-static inline int
-otx2_get_vf(uint16_t pf_func)
-{
- return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
-}
-
-static inline int
-otx2_get_pf(uint16_t pf_func)
-{
- return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
-}
-
-static inline int
-otx2_pfvf_func(int pf, int vf)
-{
- return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
-}
-
-static inline int
-otx2_is_afvf(uint16_t pf_func)
-{
- return !(pf_func & ~RVU_PFVF_FUNC_MASK);
-}
-
-#endif /* _OTX2_DEV_H */
diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h
deleted file mode 100644
index 34268e3af3..0000000000
--- a/drivers/common/octeontx2/otx2_io_arm64.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_ARM64_H_
-#define _OTX2_IO_ARM64_H_
-
-#define otx2_load_pair(val0, val1, addr) ({ \
- asm volatile( \
- "ldp %x[x0], %x[x1], [%x[p1]]" \
- :[x0]"=r"(val0), [x1]"=r"(val1) \
- :[p1]"r"(addr) \
- ); })
-
-#define otx2_store_pair(val0, val1, addr) ({ \
- asm volatile( \
- "stp %x[x0], %x[x1], [%x[p1],#0]!" \
- ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
- ); })
-
-#define otx2_prefetch_store_keep(ptr) ({\
- asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); })
-
-#if defined(__ARM_FEATURE_SVE)
-#define __LSE_PREAMBLE " .cpu generic+lse+sve\n"
-#else
-#define __LSE_PREAMBLE " .cpu generic+lse\n"
-#endif
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with no ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadd %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadda %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeor xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result): [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit_release(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeorl xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result) : [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- dst128[0] = src128[0];
- dst128[1] = src128[1];
- /* lmtext receives following value:
- * 1: NIX_SUBDC_EXT needed i.e. tx vlan case
- * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. tstamp case
- */
- if (lmtext) {
- dst128[2] = src128[2];
- if (lmtext > 1)
- dst128[3] = src128[3];
- }
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- uint8_t i;
-
- for (i = 0; i < segdw; i++)
- dst128[i] = src128[i];
-}
-
-#undef __LSE_PREAMBLE
-#endif /* _OTX2_IO_ARM64_H_ */
diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h
deleted file mode 100644
index 3436a6c3d5..0000000000
--- a/drivers/common/octeontx2/otx2_io_generic.h
+++ /dev/null
@@ -1,75 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_GENERIC_H_
-#define _OTX2_IO_GENERIC_H_
-
-#include <string.h>
-
-#define otx2_load_pair(val0, val1, addr) \
-do { \
- val0 = rte_read64_relaxed((void *)(addr)); \
- val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \
-} while (0)
-
-#define otx2_store_pair(val0, val1, addr) \
-do { \
- rte_write64_relaxed(val0, (void *)(addr)); \
- rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \
-} while (0)
-
-#define otx2_prefetch_store_keep(ptr) do {} while (0)
-
-static inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit_release(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- /* Copy four words if lmtext = 0
- * six words if lmtext = 1
- * eight words if lmtext =2
- */
- memcpy(out, in, (4 + (2 * lmtext)) * sizeof(uint64_t));
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- RTE_SET_USED(out);
- RTE_SET_USED(in);
- RTE_SET_USED(segdw);
-}
-#endif /* _OTX2_IO_GENERIC_H_ */
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
deleted file mode 100644
index 93fc95c0e1..0000000000
--- a/drivers/common/octeontx2/otx2_irq.c
+++ /dev/null
@@ -1,288 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-
-#ifdef RTE_EAL_VFIO
-
-#include <inttypes.h>
-#include <linux/vfio.h>
-#include <sys/eventfd.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
-#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \
- sizeof(int) * (MAX_INTR_VEC_ID))
-
-static int
-irq_get_info(struct rte_intr_handle *intr_handle)
-{
- struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc, vfio_dev_fd;
-
- irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
- if (rc < 0) {
- otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
- return rc;
- }
-
- otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x",
- irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID);
-
- if (irq.count > MAX_INTR_VEC_ID) {
- otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
- return -1;
- } else {
- if (rte_intr_max_intr_set(intr_handle, irq.count))
- return -1;
- }
-
- return 0;
-}
-
-static int
-irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- len = sizeof(struct vfio_irq_set) + sizeof(int32_t);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
-
- irq_set->start = vec;
- irq_set->count = 1;
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- /* Use vec fd to set interrupt vectors */
- fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
-
- return rc;
-}
-
-static int
-irq_init(struct rte_intr_handle *intr_handle)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
- uint32_t i;
-
- if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
- otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- return -ERANGE;
- }
-
- len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
- irq_set->start = 0;
- irq_set->count = rte_intr_max_intr_get(intr_handle);
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- fd_ptr = (int32_t *)&irq_set->data[0];
- for (i = 0; i < irq_set->count; i++)
- fd_ptr[i] = -1;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set irqs vector rc=%d", rc);
-
- return rc;
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int
-otx2_disable_irqs(struct rte_intr_handle *intr_handle)
-{
- /* Clear max_intr to indicate re-init next time */
- if (rte_intr_max_intr_set(intr_handle, 0))
- return -1;
- return rte_intr_disable(intr_handle);
-}
-
-/**
- * @internal
- * Register IRQ
- */
-int
-otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint32_t nb_efd, tmp_nb_efd;
- int rc, fd;
-
-	/* If max_intr is not yet known, read it from VFIO */
- if (rte_intr_max_intr_get(intr_handle) == 0) {
- irq_get_info(intr_handle);
- irq_init(intr_handle);
- }
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- tmp_handle = intr_handle;
- /* Create new eventfd for interrupt vector */
- fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (fd == -1)
- return -ENODEV;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return errno;
-
- /* Register vector interrupt callback */
- rc = rte_intr_callback_register(tmp_handle, cb, data);
- if (rc) {
- otx2_err("Failed to register vector:0x%x irq callback.", vec);
- return rc;
- }
-
- rte_intr_efds_index_set(intr_handle, vec, fd);
- nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
- vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
- rte_intr_nb_efd_set(intr_handle, nb_efd);
-
- tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
- if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
- rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
-
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- /* Enable MSIX vectors to VFIO */
- return irq_config(intr_handle, vec);
-}
-
-/**
- * @internal
- * Unregister IRQ
- */
-void
-otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint8_t retries = 5; /* 5 ms */
- int rc, fd;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, rte_intr_max_intr_get(intr_handle));
- return;
- }
-
- tmp_handle = intr_handle;
- fd = rte_intr_efds_index_get(intr_handle, vec);
- if (fd == -1)
- return;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return;
-
- do {
- /* Un-register callback func from platform lib */
- rc = rte_intr_callback_unregister(tmp_handle, cb, data);
- /* Retry only if -EAGAIN */
- if (rc != -EAGAIN)
- break;
- rte_delay_ms(1);
- retries--;
- } while (retries);
-
- if (rc < 0) {
- otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
- return;
- }
-
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- if (rte_intr_efds_index_get(intr_handle, vec) != -1)
- close(rte_intr_efds_index_get(intr_handle, vec));
- /* Disable MSIX vectors from VFIO */
- rte_intr_efds_index_set(intr_handle, vec, -1);
- irq_config(intr_handle, vec);
-}
-
-#else
-
-/**
- * @internal
- * Register IRQ
- */
-int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
- return -ENOTSUP;
-}
-
-
-/**
- * @internal
- * Unregister IRQ
- */
-void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle)
-{
- return -ENOTSUP;
-}
-
-#endif /* RTE_EAL_VFIO */
diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h
deleted file mode 100644
index 0683cf5543..0000000000
--- a/drivers/common/octeontx2/otx2_irq.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IRQ_H_
-#define _OTX2_IRQ_H_
-
-#include <rte_pci.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-
-typedef struct {
-/* 128 devices translate to two 64 bits dwords */
-#define MAX_VFPF_DWORD_BITS 2
- uint64_t bits[MAX_VFPF_DWORD_BITS];
-} otx2_intr_t;
-
-__rte_internal
-int otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-void otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-int otx2_disable_irqs(struct rte_intr_handle *intr_handle);
-
-#endif /* _OTX2_IRQ_H_ */
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
deleted file mode 100644
index 6df1e8ea63..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <errno.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "otx2_mbox.h"
-#include "otx2_dev.h"
-
-#define RVU_AF_AFPF_MBOX0 (0x02000)
-#define RVU_AF_AFPF_MBOX1 (0x02008)
-
-#define RVU_PF_PFAF_MBOX0 (0xC00)
-#define RVU_PF_PFAF_MBOX1 (0xC08)
-
-#define RVU_PF_VFX_PFVF_MBOX0 (0x0000)
-#define RVU_PF_VFX_PFVF_MBOX1 (0x0008)
-
-#define RVU_VF_VFPF_MBOX0 (0x0000)
-#define RVU_VF_VFPF_MBOX1 (0x0008)
-
-static inline uint16_t
-msgs_offset(void)
-{
- return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
-}
-
-void
-otx2_mbox_fini(struct otx2_mbox *mbox)
-{
- mbox->reg_base = 0;
- mbox->hwbase = 0;
- rte_free(mbox->dev);
- mbox->dev = NULL;
-}
-
-void
-otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- rte_spinlock_lock(&mdev->mbox_lock);
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- tx_hdr->msg_size = 0;
- tx_hdr->num_msgs = 0;
- rx_hdr->msg_size = 0;
- rx_hdr->num_msgs = 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-}
-
-int
-otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevs, uint64_t intr_offset)
-{
- struct otx2_mbox_dev *mdev;
- int devid;
-
- mbox->intr_offset = intr_offset;
- mbox->reg_base = reg_base;
- mbox->hwbase = hwbase;
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_PFVF:
- mbox->tx_start = MBOX_DOWN_TX_START;
- mbox->rx_start = MBOX_DOWN_RX_START;
- mbox->tx_size = MBOX_DOWN_TX_SIZE;
- mbox->rx_size = MBOX_DOWN_RX_SIZE;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_VFPF:
- mbox->tx_start = MBOX_DOWN_RX_START;
- mbox->rx_start = MBOX_DOWN_TX_START;
- mbox->tx_size = MBOX_DOWN_RX_SIZE;
- mbox->rx_size = MBOX_DOWN_TX_SIZE;
- break;
- case MBOX_DIR_AFPF_UP:
- case MBOX_DIR_PFVF_UP:
- mbox->tx_start = MBOX_UP_TX_START;
- mbox->rx_start = MBOX_UP_RX_START;
- mbox->tx_size = MBOX_UP_TX_SIZE;
- mbox->rx_size = MBOX_UP_RX_SIZE;
- break;
- case MBOX_DIR_PFAF_UP:
- case MBOX_DIR_VFPF_UP:
- mbox->tx_start = MBOX_UP_RX_START;
- mbox->rx_start = MBOX_UP_TX_START;
- mbox->tx_size = MBOX_UP_RX_SIZE;
- mbox->rx_size = MBOX_UP_TX_SIZE;
- break;
- default:
- return -ENODEV;
- }
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_AFPF_UP:
- mbox->trigger = RVU_AF_AFPF_MBOX0;
- mbox->tr_shift = 4;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_PFAF_UP:
- mbox->trigger = RVU_PF_PFAF_MBOX1;
- mbox->tr_shift = 0;
- break;
- case MBOX_DIR_PFVF:
- case MBOX_DIR_PFVF_UP:
- mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
- mbox->tr_shift = 12;
- break;
- case MBOX_DIR_VFPF:
- case MBOX_DIR_VFPF_UP:
- mbox->trigger = RVU_VF_VFPF_MBOX1;
- mbox->tr_shift = 0;
- break;
- default:
- return -ENODEV;
- }
-
- mbox->dev = rte_zmalloc("mbox dev",
- ndevs * sizeof(struct otx2_mbox_dev),
- OTX2_ALIGN);
- if (!mbox->dev) {
- otx2_mbox_fini(mbox);
- return -ENOMEM;
- }
- mbox->ndevs = ndevs;
- for (devid = 0; devid < ndevs; devid++) {
- mdev = &mbox->dev[devid];
- mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE));
- rte_spinlock_init(&mdev->mbox_lock);
- /* Init header to reset value */
- otx2_mbox_reset(mbox, devid);
- }
-
- return 0;
-}
-
-/**
- * @internal
- * Allocate a message response
- */
-struct mbox_msghdr *
-otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size,
- int size_rsp)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr = NULL;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN);
- /* Check if there is space in mailbox */
- if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset())
- goto exit;
- if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset())
- goto exit;
- if (mdev->msg_size == 0)
- mdev->num_msgs = 0;
- mdev->num_msgs++;
-
- msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase +
- mbox->tx_start + msgs_offset() + mdev->msg_size));
-
- /* Clear the whole msg region */
- otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size);
- /* Init message header with reset values */
- msghdr->ver = OTX2_MBOX_VERSION;
- mdev->msg_size += size;
- mdev->rsp_size += size_rsp;
- msghdr->next_msgoff = mdev->msg_size + msgs_offset();
-exit:
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return msghdr;
-}
-
-/**
- * @internal
- * Send a mailbox message
- */
-void
-otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- /* Reset header for next messages */
- tx_hdr->msg_size = mdev->msg_size;
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- mdev->msgs_acked = 0;
-
- /* num_msgs != 0 signals to the peer that the buffer has a number of
- * messages. So this should be written after copying txmem
- */
- tx_hdr->num_msgs = mdev->num_msgs;
- rx_hdr->num_msgs = 0;
-
- /* Sync mbox data into memory */
- rte_wmb();
-
- /* The interrupt should be fired after num_msgs is written
- * to the shared memory
- */
- rte_write64(1, (volatile void *)(mbox->reg_base +
- (mbox->trigger | (devid << mbox->tr_shift))));
-}
-
-/**
- * @internal
- * Wait and get mailbox response
- */
-int
-otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp(mbox, devid);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-/**
- * Polling for given wait time to get mailbox response
- */
-static int
-mbox_poll(struct otx2_mbox *mbox, uint32_t wait)
-{
- uint32_t timeout = 0, sleep = 1;
- uint32_t wait_us = wait * 1000;
- uint64_t rsp_reg = 0;
- uintptr_t reg_addr;
-
- reg_addr = mbox->reg_base + mbox->intr_offset;
- do {
- rsp_reg = otx2_read64(reg_addr);
-
- if (timeout >= wait_us)
- return -ETIMEDOUT;
-
- rte_delay_us(sleep);
- timeout += sleep;
- } while (!rsp_reg);
-
- rte_smp_rmb();
-
- /* Clear interrupt */
- otx2_write64(rsp_reg, reg_addr);
-
- /* Reset mbox */
- otx2_mbox_reset(mbox, 0);
-
- return 0;
-}
-
-/**
- * @internal
- * Wait and get mailbox response with timeout
- */
-int
-otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-static int
-mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo)
-{
- volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- uint32_t timeout = 0, sleep = 1;
-
-	rst_timo = rst_timo * 1000; /* Milliseconds to microseconds */
- while (mdev->num_msgs > mdev->msgs_acked) {
- rte_delay_us(sleep);
- timeout += sleep;
- if (timeout >= rst_timo) {
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->rx_start);
-
- otx2_err("MBOX[devid: %d] message wait timeout %d, "
- "num_msgs: %d, msgs_acked: %d "
- "(tx/rx num_msgs: %d/%d), msg_size: %d, "
- "rsp_size: %d",
- devid, timeout, mdev->num_msgs,
- mdev->msgs_acked, tx_hdr->num_msgs,
- rx_hdr->num_msgs, mdev->msg_size,
- mdev->rsp_size);
-
- return -EIO;
- }
- rte_rmb();
- }
- return 0;
-}
-
-int
-otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int rc = 0;
-
- /* Sync with mbox region */
- rte_rmb();
-
- if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 ||
- mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) {
- /* In case of VF, Wait a bit more to account round trip delay */
- tmo = tmo * 2;
- }
-
- /* Wait message */
- if (rte_thread_is_intr())
- rc = mbox_poll(mbox, tmo);
- else
- rc = mbox_wait(mbox, devid, tmo);
-
- if (!rc)
- rc = mdev->num_msgs;
-
- return rc;
-}
-
-/**
- * @internal
- * Wait for the mailbox response
- */
-int
-otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
-{
- return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
-}
-
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int avail;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- avail = mbox->tx_size - mdev->msg_size - msgs_offset();
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return avail;
-}
-
-int
-otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
-{
- struct ready_msg_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_ready(mbox);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->hdr.ver != OTX2_MBOX_VERSION) {
- otx2_err("Incompatible MBox versions(AF: 0x%04x DPDK: 0x%04x)",
- rsp->hdr.ver, OTX2_MBOX_VERSION);
- return -EPIPE;
- }
-
- if (pcifunc)
- *pcifunc = rsp->hdr.pcifunc;
-
- return 0;
-}
-
-int
-otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc,
- uint16_t id)
-{
- struct msg_rsp *rsp;
-
- rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
- if (!rsp)
- return -ENOMEM;
- rsp->hdr.id = id;
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
- rsp->hdr.rc = MBOX_MSG_INVALID;
- rsp->hdr.pcifunc = pcifunc;
-
- return 0;
-}
-
-/**
- * @internal
- * Convert mail box ID to name
- */
-const char *otx2_mbox_id2name(uint16_t id)
-{
- switch (id) {
-#define M(_name, _id, _1, _2, _3) case _id: return # _name;
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return "INVALID ID";
- }
-}
-
-int otx2_mbox_id2size(uint16_t id)
-{
- switch (id) {
-#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type);
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return 0;
- }
-}
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
deleted file mode 100644
index 25b521a7fa..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ /dev/null
@@ -1,1958 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MBOX_H__
-#define __OTX2_MBOX_H__
-
-#include <errno.h>
-#include <stdbool.h>
-
-#include <rte_ether.h>
-#include <rte_spinlock.h>
-
-#include <otx2_common.h>
-
-#define SZ_64K (64ULL * 1024ULL)
-#define SZ_1K (1ULL * 1024ULL)
-#define MBOX_SIZE SZ_64K
-
-/* AF/PF: PF initiated, PF/VF VF initiated */
-#define MBOX_DOWN_RX_START 0
-#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
-#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
-#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
-/* AF/PF: AF initiated, PF/VF PF initiated */
-#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
-#define MBOX_UP_RX_SIZE SZ_1K
-#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
-#define MBOX_UP_TX_SIZE SZ_1K
-
-#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
-# error "Incorrect mailbox area sizes"
-#endif
-
-#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
-
-#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */
-
-#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16 bytes */
-
-/* Mailbox directions */
-#define MBOX_DIR_AFPF 0 /* AF replies to PF */
-#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
-#define MBOX_DIR_PFVF 2 /* PF replies to VF */
-#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
-#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
-#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
-#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
-#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
-
-/* Device memory does not support unaligned access; instruct the compiler
- * not to optimize memory accesses when working with mailbox memory.
- */
-#define __otx2_io volatile
-
-struct otx2_mbox_dev {
- void *mbase; /* This dev's mbox region */
- rte_spinlock_t mbox_lock;
- uint16_t msg_size; /* Total msg size to be sent */
- uint16_t rsp_size; /* Total rsp size to be sure the reply is ok */
- uint16_t num_msgs; /* No of msgs sent or waiting for response */
- uint16_t msgs_acked; /* No of msgs for which response is received */
-};
-
-struct otx2_mbox {
- uintptr_t hwbase; /* Mbox region advertised by HW */
- uintptr_t reg_base;/* CSR base for this dev */
- uint64_t trigger; /* Trigger mbox notification */
- uint16_t tr_shift; /* Mbox trigger shift */
- uint64_t rx_start; /* Offset of Rx region in mbox memory */
- uint64_t tx_start; /* Offset of Tx region in mbox memory */
- uint16_t rx_size; /* Size of Rx region */
- uint16_t tx_size; /* Size of Tx region */
- uint16_t ndevs; /* The number of peers */
- struct otx2_mbox_dev *dev;
- uint64_t intr_offset; /* Offset to interrupt register */
-};
-
-/* Header which precedes all mbox messages */
-struct mbox_hdr {
- uint64_t __otx2_io msg_size; /* Total msgs size embedded */
- uint16_t __otx2_io num_msgs; /* No of msgs embedded */
-};
-
-/* Header which precedes every msg and is also part of it */
-struct mbox_msghdr {
- uint16_t __otx2_io pcifunc; /* Who's sending this msg */
- uint16_t __otx2_io id; /* Mbox message ID */
-#define OTX2_MBOX_REQ_SIG (0xdead)
-#define OTX2_MBOX_RSP_SIG (0xbeef)
- /* Signature, for validating corrupted msgs */
- uint16_t __otx2_io sig;
-#define OTX2_MBOX_VERSION (0x000b)
- /* Version of msg's structure for this ID */
- uint16_t __otx2_io ver;
- /* Offset of next msg within mailbox region */
- uint16_t __otx2_io next_msgoff;
- int __otx2_io rc; /* Msg processed response code */
-};
-
-/* Mailbox message types */
-#define MBOX_MSG_MASK 0xFFFF
-#define MBOX_MSG_INVALID 0xFFFE
-#define MBOX_MSG_MAX 0xFFFF
-
-#define MBOX_MESSAGES \
-/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
-M(READY, 0x001, ready, msg_req, ready_msg_rsp) \
-M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\
-M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\
-M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \
-M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \
-M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \
-M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \
-M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \
-M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \
-/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
-M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
-M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
-M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \
-M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \
-M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \
-M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \
-M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \
-M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\
-M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \
-M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \
-M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \
- cgx_pause_frm_cfg) \
-M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \
-M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \
-M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \
- cgx_mac_addr_add_rsp) \
-M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \
- msg_rsp) \
-M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \
- cgx_max_dmac_entries_get_rsp) \
-M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \
- cgx_set_link_state_msg, msg_rsp) \
-M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \
- cgx_phy_mod_type) \
-M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \
- msg_rsp) \
-M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
-M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\
- cgx_set_link_mode_rsp) \
-M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \
-M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
-/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
-M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \
- npa_lf_alloc_rsp) \
-M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \
-M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\
-M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\
-/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
-M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \
- sso_lf_alloc_rsp) \
-M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \
-M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\
-M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \
-M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \
- msg_rsp) \
-M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \
- msg_rsp) \
-M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
- sso_grp_priority) \
-M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
-M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
- msg_rsp) \
-M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
- sso_grp_stats) \
-M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \
- sso_hws_stats) \
-M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
- sso_release_xaq, msg_rsp) \
-/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
-M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
- tim_lf_alloc_rsp) \
-M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \
-M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\
-M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \
- tim_enable_rsp) \
-M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \
-/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
-M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, \
- cpt_lf_alloc_rsp_msg) \
-M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \
-M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \
- cpt_rd_wr_reg_msg) \
-M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \
- cpt_set_crypto_grp_req_msg, \
- msg_rsp) \
-M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
- cpt_inline_ipsec_cfg_msg, msg_rsp) \
-M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \
- cpt_rx_inline_lf_cfg_msg, msg_rsp) \
-M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \
-/* REE mbox IDs (range 0xE00 - 0xFFF) */ \
-M(REE_CONFIG_LF, 0xE01, ree_config_lf, ree_lf_req_msg, \
- msg_rsp) \
-M(REE_RD_WR_REGISTER, 0xE02, ree_rd_wr_register, ree_rd_wr_reg_msg, \
- ree_rd_wr_reg_msg) \
-M(REE_RULE_DB_PROG, 0xE03, ree_rule_db_prog, \
- ree_rule_db_prog_req_msg, \
- msg_rsp) \
-M(REE_RULE_DB_LEN_GET, 0xE04, ree_rule_db_len_get, ree_req_msg, \
- ree_rule_db_len_rsp_msg) \
-M(REE_RULE_DB_GET, 0xE05, ree_rule_db_get, \
- ree_rule_db_get_req_msg, \
- ree_rule_db_get_rsp_msg) \
-/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
-M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \
- npc_mcam_alloc_entry_req, \
- npc_mcam_alloc_entry_rsp) \
-M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \
- npc_mcam_free_entry_req, msg_rsp) \
-M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \
- npc_mcam_write_entry_req, msg_rsp) \
-M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \
- npc_mcam_shift_entry_req, \
- npc_mcam_shift_entry_rsp) \
-M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \
- npc_mcam_alloc_counter_req, \
- npc_mcam_alloc_counter_rsp) \
-M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \
- npc_mcam_unmap_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \
- npc_mcam_oper_counter_req, \
- npc_mcam_oper_counter_rsp) \
-M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\
- npc_mcam_alloc_and_write_entry_req, \
- npc_mcam_alloc_and_write_entry_rsp) \
-M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \
- npc_get_kex_cfg_rsp) \
-M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, \
- npc_install_flow_req, \
- npc_install_flow_rsp) \
-M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, \
- npc_delete_flow_req, msg_rsp) \
-M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \
- npc_mcam_read_entry_req, \
- npc_mcam_read_entry_rsp) \
-M(NPC_SET_PKIND, 0x6010, npc_set_pkind, \
- npc_set_pkind, \
- msg_rsp) \
-M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \
- npc_mcam_read_base_rule_rsp) \
-/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
-M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \
- nix_lf_alloc_rsp) \
-M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \
-M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \
- nix_aq_enq_rsp) \
-M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \
- msg_rsp) \
-M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \
- nix_txsch_alloc_rsp) \
-M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \
- msg_rsp) \
-M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \
- nix_txschq_config) \
-M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \
-M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \
-M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg_rsp) \
-M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \
- msg_rsp) \
-M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \
-M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \
-M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \
-M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \
-M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \
- nix_mark_format_cfg, \
- nix_mark_format_cfg_rsp) \
-M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \
-M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \
- nix_lso_format_cfg_rsp) \
-M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \
- msg_rsp) \
-M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \
- msg_rsp) \
-M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \
- msg_rsp) \
-M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
- nix_bp_cfg_rsp) \
-M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\
-M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \
- nix_get_mac_addr_rsp) \
-M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \
- nix_inline_ipsec_cfg, msg_rsp) \
-M(NIX_INLINE_IPSEC_LF_CFG, \
- 0x801a, nix_inline_ipsec_lf_cfg, \
- nix_inline_ipsec_lf_cfg, msg_rsp)
-
-/* Messages initiated by AF (range 0xC00 - 0xDFF) */
-#define MBOX_UP_CGX_MESSAGES \
-M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \
- msg_rsp) \
-M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \
- msg_rsp)
-
-enum {
-#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
-MBOX_MESSAGES
-MBOX_UP_CGX_MESSAGES
-#undef M
-};
-
-/* Mailbox message formats */
-
-#define RVU_DEFAULT_PF_FUNC 0xFFFF
-
-/* Generic request msg used for those mbox messages which
- * don't send any data in the request.
- */
-struct msg_req {
- struct mbox_msghdr hdr;
-};
-
-/* Generic response msg used as an ack or response for those mbox
- * messages which don't have a specific rsp msg format.
- */
-struct msg_rsp {
- struct mbox_msghdr hdr;
-};
-
-/* RVU mailbox error codes
- * Range 256 - 300.
- */
-enum rvu_af_status {
- RVU_INVALID_VF_ID = -256,
-};
-
-struct ready_msg_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sclk_feq; /* SCLK frequency */
- uint16_t __otx2_io rclk_freq; /* RCLK frequency */
-};
-
-enum npc_pkind_type {
- NPC_RX_CUSTOM_PRE_L2_PKIND = 55ULL,
- NPC_RX_VLAN_EXDSA_PKIND = 56ULL,
- NPC_RX_CHLEN24B_PKIND,
- NPC_RX_CPT_HDR_PKIND,
- NPC_RX_CHLEN90B_PKIND,
- NPC_TX_HIGIG_PKIND,
- NPC_RX_HIGIG_PKIND,
- NPC_RX_EXDSA_PKIND,
- NPC_RX_EDSA_PKIND,
- NPC_TX_DEF_PKIND,
-};
-
-#define OTX2_PRIV_FLAGS_CH_LEN_90B 254
-#define OTX2_PRIV_FLAGS_CH_LEN_24B 255
-
-/* Struct to set pkind */
-struct npc_set_pkind {
- struct mbox_msghdr hdr;
-#define OTX2_PRIV_FLAGS_DEFAULT BIT_ULL(0)
-#define OTX2_PRIV_FLAGS_EDSA BIT_ULL(1)
-#define OTX2_PRIV_FLAGS_HIGIG BIT_ULL(2)
-#define OTX2_PRIV_FLAGS_FDSA BIT_ULL(3)
-#define OTX2_PRIV_FLAGS_EXDSA BIT_ULL(4)
-#define OTX2_PRIV_FLAGS_VLAN_EXDSA BIT_ULL(5)
-#define OTX2_PRIV_FLAGS_CUSTOM BIT_ULL(63)
- uint64_t __otx2_io mode;
-#define PKIND_TX BIT_ULL(0)
-#define PKIND_RX BIT_ULL(1)
- uint8_t __otx2_io dir;
- uint8_t __otx2_io pkind; /* valid only in case custom flag */
- uint8_t __otx2_io var_len_off;
- /* Offset of custom header length field.
- * Valid only for pkind NPC_RX_CUSTOM_PRE_L2_PKIND
- */
- uint8_t __otx2_io var_len_off_mask; /* Mask for length with in offset */
- uint8_t __otx2_io shift_dir;
- /* Shift direction to get length of the
- * header at var_len_off
- */
-};
-
-/* Structure for requesting resource provisioning.
- * 'modify' flag to be used when either requesting more
- * or to detach partial of a certain resource type.
- * Rest of the fields specify how many of what type to
- * be attached.
- * To request LFs from two blocks of same type this mailbox
- * can be sent twice as below:
- * struct rsrc_attach *attach;
- * .. Allocate memory for message ..
- * attach->cptlfs = 3; <3 LFs from CPT0>
- * .. Send message ..
- * .. Allocate memory for message ..
- * attach->modify = 1;
- * attach->cpt_blkaddr = BLKADDR_CPT1;
- * attach->cptlfs = 2; <2 LFs from CPT1>
- * .. Send message ..
- */
-struct rsrc_attach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io modify:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io reelfs;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- int __otx2_io cpt_blkaddr;
- /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */
- int __otx2_io ree_blkaddr;
-};
-
-/* Structure for relinquishing resources.
- * 'partial' flag to be used when relinquishing all resources
- * but only of a certain type. If not set, all resources of all
- * types provisioned to the RVU function will be detached.
- */
-struct rsrc_detach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io partial:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint8_t __otx2_io sso:1;
- uint8_t __otx2_io ssow:1;
- uint8_t __otx2_io timlfs:1;
- uint8_t __otx2_io cptlfs:1;
- uint8_t __otx2_io reelfs:1;
-};
-
-/* NIX Transmit schedulers */
-#define NIX_TXSCH_LVL_SMQ 0x0
-#define NIX_TXSCH_LVL_MDQ 0x0
-#define NIX_TXSCH_LVL_TL4 0x1
-#define NIX_TXSCH_LVL_TL3 0x2
-#define NIX_TXSCH_LVL_TL2 0x3
-#define NIX_TXSCH_LVL_TL1 0x4
-#define NIX_TXSCH_LVL_CNT 0x5
-
-/*
- * Number of resources available to the caller.
- * In reply to MBOX_MSG_FREE_RSRC_CNT.
- */
-struct free_rsrcs_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT];
- uint16_t __otx2_io sso;
- uint16_t __otx2_io tim;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io cpt;
- uint8_t __otx2_io npa;
- uint8_t __otx2_io nix;
- uint16_t __otx2_io schq_nix1[NIX_TXSCH_LVL_CNT];
- uint8_t __otx2_io nix1;
- uint8_t __otx2_io cpt1;
- uint8_t __otx2_io ree0;
- uint8_t __otx2_io ree1;
-};
-
-#define MSIX_VECTOR_INVALID 0xFFFF
-#define MAX_RVU_BLKLF_CNT 256
-
-struct msix_offset_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io npa_msixoff;
- uint16_t __otx2_io nix_msixoff;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cpt1_lfs;
- uint16_t __otx2_io ree0_lfs;
- uint16_t __otx2_io ree1_lfs;
- uint16_t __otx2_io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT];
-
-};
-
-/* CGX mbox message formats */
-
-struct cgx_stats_rsp {
- struct mbox_msghdr hdr;
-#define CGX_RX_STATS_COUNT 13
-#define CGX_TX_STATS_COUNT 18
- uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT];
- uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT];
-};
-
-struct cgx_fec_stats_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io fec_corr_blks;
- uint64_t __otx2_io fec_uncorr_blks;
-};
-/* Structure for requesting the operation for
- * setting/getting mac address in the CGX interface
- */
-struct cgx_mac_addr_set_or_get {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for requesting the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for response against the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for requesting the operation to
- * delete DMAC filter entry from CGX interface
- */
-struct cgx_mac_addr_del_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for response against the operation to
- * get maximum supported DMAC filter entries
- */
-struct cgx_max_dmac_entries_get_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io max_dmac_filters;
-};
-
-struct cgx_link_user_info {
- uint64_t __otx2_io link_up:1;
- uint64_t __otx2_io full_duplex:1;
- uint64_t __otx2_io lmac_type_id:4;
- uint64_t __otx2_io speed:20; /* speed in Mbps */
- uint64_t __otx2_io an:1; /* AN supported or not */
- uint64_t __otx2_io fec:2; /* FEC type if enabled else 0 */
- uint64_t __otx2_io port:8;
-#define LMACTYPE_STR_LEN 16
- char lmac_type[LMACTYPE_STR_LEN];
-};
-
-struct cgx_link_info_msg {
- struct mbox_msghdr hdr;
- struct cgx_link_user_info link_info;
-};
-
-struct cgx_ptp_rx_info_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ptp_en;
-};
-
-struct cgx_pause_frm_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io set;
- /* set = 1 if the request is to config pause frames */
- /* set = 0 if the request is to fetch pause frames config */
- uint8_t __otx2_io rx_pause;
- uint8_t __otx2_io tx_pause;
-};
-
-struct sfp_eeprom_s {
-#define SFP_EEPROM_SIZE 256
- uint16_t __otx2_io sff_id;
- uint8_t __otx2_io buf[SFP_EEPROM_SIZE];
- uint64_t __otx2_io reserved;
-};
-
-enum fec_type {
- OTX2_FEC_NONE,
- OTX2_FEC_BASER,
- OTX2_FEC_RS,
-};
-
-struct phy_s {
- uint64_t __otx2_io can_change_mod_type : 1;
- uint64_t __otx2_io mod_type : 1;
-};
-
-struct cgx_lmac_fwdata_s {
- uint16_t __otx2_io rw_valid;
- uint64_t __otx2_io supported_fec;
- uint64_t __otx2_io supported_an;
- uint64_t __otx2_io supported_link_modes;
- /* Only applicable if AN is supported */
- uint64_t __otx2_io advertised_fec;
- uint64_t __otx2_io advertised_link_modes;
- /* Only applicable if SFP/QSFP slot is present */
- struct sfp_eeprom_s sfp_eeprom;
- struct phy_s phy;
-#define LMAC_FWDATA_RESERVED_MEM 1023
- uint64_t __otx2_io reserved[LMAC_FWDATA_RESERVED_MEM];
-};
-
-struct cgx_fw_data {
- struct mbox_msghdr hdr;
- struct cgx_lmac_fwdata_s fwdata;
-};
-
-struct fec_mode {
- struct mbox_msghdr hdr;
- int __otx2_io fec;
-};
-
-struct cgx_set_link_state_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
-};
-
-struct cgx_phy_mod_type {
- struct mbox_msghdr hdr;
- int __otx2_io mod;
-};
-
-struct cgx_set_link_mode_args {
- uint32_t __otx2_io speed;
- uint8_t __otx2_io duplex;
- uint8_t __otx2_io an;
- uint8_t __otx2_io ports;
- uint64_t __otx2_io mode;
-};
-
-struct cgx_set_link_mode_req {
- struct mbox_msghdr hdr;
- struct cgx_set_link_mode_args args;
-};
-
-struct cgx_set_link_mode_rsp {
- struct mbox_msghdr hdr;
- int __otx2_io status;
-};
-/* NPA mbox message formats */
-
-/* NPA mailbox error codes
- * Range 301 - 400.
- */
-enum npa_af_status {
- NPA_AF_ERR_PARAM = -301,
- NPA_AF_ERR_AQ_FULL = -302,
- NPA_AF_ERR_AQ_ENQUEUE = -303,
- NPA_AF_ERR_AF_LF_INVALID = -304,
- NPA_AF_ERR_AF_LF_ALLOC = -305,
- NPA_AF_ERR_LF_RESET = -306,
-};
-
-#define NPA_AURA_SZ_0 0
-#define NPA_AURA_SZ_128 1
-#define NPA_AURA_SZ_256 2
-#define NPA_AURA_SZ_512 3
-#define NPA_AURA_SZ_1K 4
-#define NPA_AURA_SZ_2K 5
-#define NPA_AURA_SZ_4K 6
-#define NPA_AURA_SZ_8K 7
-#define NPA_AURA_SZ_16K 8
-#define NPA_AURA_SZ_32K 9
-#define NPA_AURA_SZ_64K 10
-#define NPA_AURA_SZ_128K 11
-#define NPA_AURA_SZ_256K 12
-#define NPA_AURA_SZ_512K 13
-#define NPA_AURA_SZ_1M 14
-#define NPA_AURA_SZ_MAX 15
-
-/* For NPA LF context alloc and init */
-struct npa_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- int __otx2_io aura_sz; /* No of auras. See NPA_AURA_SZ_* */
- uint32_t __otx2_io nr_pools; /* No of pools */
- uint64_t __otx2_io way_mask;
-};
-
-struct npa_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */
- uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */
- uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */
-};
-
-/* NPA AQ enqueue msg */
-struct npa_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io aura_id;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == AURA.
- * LF fills the pool_id in aura.pool_addr. AF will translate
- * the pool_id to pool context pointer.
- */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == WRITE/INIT and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == AURA */
- __otx2_io struct npa_aura_s aura_mask;
- /* Valid when op == WRITE and ctype == POOL */
- __otx2_io struct npa_pool_s pool_mask;
- };
-};
-
-struct npa_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- /* Valid when op == READ and ctype == AURA */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == READ and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
-};
-
-/* Disable all contexts of type 'ctype' */
-struct hwctx_disable_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ctype;
-};
-
-/* NIX mbox message formats */
-
-/* NIX mailbox error codes
- * Range 401 - 500.
- */
-enum nix_af_status {
- NIX_AF_ERR_PARAM = -401,
- NIX_AF_ERR_AQ_FULL = -402,
- NIX_AF_ERR_AQ_ENQUEUE = -403,
- NIX_AF_ERR_AF_LF_INVALID = -404,
- NIX_AF_ERR_AF_LF_ALLOC = -405,
- NIX_AF_ERR_TLX_ALLOC_FAIL = -406,
- NIX_AF_ERR_TLX_INVALID = -407,
- NIX_AF_ERR_RSS_SIZE_INVALID = -408,
- NIX_AF_ERR_RSS_GRPS_INVALID = -409,
- NIX_AF_ERR_FRS_INVALID = -410,
- NIX_AF_ERR_RX_LINK_INVALID = -411,
- NIX_AF_INVAL_TXSCHQ_CFG = -412,
- NIX_AF_SMQ_FLUSH_FAILED = -413,
- NIX_AF_ERR_LF_RESET = -414,
- NIX_AF_ERR_RSS_NOSPC_FIELD = -415,
- NIX_AF_ERR_RSS_NOSPC_ALGO = -416,
- NIX_AF_ERR_MARK_CFG_FAIL = -417,
- NIX_AF_ERR_LSO_CFG_FAIL = -418,
- NIX_AF_INVAL_NPA_PF_FUNC = -419,
- NIX_AF_INVAL_SSO_PF_FUNC = -420,
- NIX_AF_ERR_TX_VTAG_NOSPC = -421,
- NIX_AF_ERR_RX_VTAG_INUSE = -422,
- NIX_AF_ERR_PTP_CONFIG_FAIL = -423,
-};
-
-/* For NIX LF context alloc and init */
-struct nix_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint32_t __otx2_io rq_cnt; /* No of receive queues */
- uint32_t __otx2_io sq_cnt; /* No of send queues */
- uint32_t __otx2_io cq_cnt; /* No of completion queues */
- uint8_t __otx2_io xqe_sz;
- uint16_t __otx2_io rss_sz;
- uint8_t __otx2_io rss_grps;
- uint16_t __otx2_io npa_func;
- /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */
- uint16_t __otx2_io sso_func;
- uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */
- uint64_t __otx2_io way_mask;
-#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0)
- uint64_t flags;
-};
-
-struct nix_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sqb_size;
- uint16_t __otx2_io rx_chan_base;
- uint16_t __otx2_io tx_chan_base;
- uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */
- uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */
- uint8_t __otx2_io lso_tsov4_idx;
- uint8_t __otx2_io lso_tsov6_idx;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
- uint8_t __otx2_io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
- uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */
- uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */
- uint8_t __otx2_io hw_rx_tstamp_en; /*set if rx timestamping enabled */
- uint8_t __otx2_io cgx_links; /* No. of CGX links present in HW */
- uint8_t __otx2_io lbk_links; /* No. of LBK links present in HW */
- uint8_t __otx2_io sdp_links; /* No. of SDP links present in HW */
- uint8_t __otx2_io tx_link; /* Transmit channel link number */
-};
-
-struct nix_lf_free_req {
- struct mbox_msghdr hdr;
-#define NIX_LF_DISABLE_FLOWS BIT_ULL(0)
-#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1)
- uint64_t __otx2_io flags;
-};
-
-/* NIX AQ enqueue msg */
-struct nix_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io qidx;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce_mask;
- };
-};
-
-struct nix_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- __otx2_io struct nix_rq_ctx_s rq;
- __otx2_io struct nix_sq_ctx_s sq;
- __otx2_io struct nix_cq_ctx_s cq;
- __otx2_io struct nix_rsse_s rss;
- __otx2_io struct nix_rx_mce_s mce;
- };
-};
-
-/* Tx scheduler/shaper mailbox messages */
-
-#define MAX_TXSCHQ_PER_FUNC 128
-
-struct nix_txsch_alloc_req {
- struct mbox_msghdr hdr;
- /* Scheduler queue count request at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
-};
-
-struct nix_txsch_alloc_rsp {
- struct mbox_msghdr hdr;
- /* Scheduler queue count allocated at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
- /* Scheduler queue list allocated at each level */
- uint16_t __otx2_io
- schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Traffic aggregation scheduler level */
- uint8_t __otx2_io aggr_level;
- /* Aggregation lvl's RR_PRIO config */
- uint8_t __otx2_io aggr_lvl_rr_prio;
- /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
- uint8_t __otx2_io link_cfg_lvl;
-};
-
-struct nix_txsch_free_req {
- struct mbox_msghdr hdr;
-#define TXSCHQ_FREE_ALL BIT_ULL(0)
- uint16_t __otx2_io flags;
- /* Scheduler queue level to be freed */
- uint16_t __otx2_io schq_lvl;
- /* List of scheduler queues to be freed */
- uint16_t __otx2_io schq;
-};
-
-struct nix_txschq_config {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
- uint8_t __otx2_io read;
-#define TXSCHQ_IDX_SHIFT 16
-#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1)
-#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK)
- uint8_t __otx2_io num_regs;
-#define MAX_REGS_PER_MBOX_MSG 20
- uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG];
- uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG];
- /* All 0's => overwrite with new value */
- uint64_t __otx2_io regval_mask[MAX_REGS_PER_MBOX_MSG];
-};
-
-struct nix_vtag_config {
- struct mbox_msghdr hdr;
- /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
- uint8_t __otx2_io vtag_size;
- /* cfg_type is '0' for tx vlan cfg
- * cfg_type is '1' for rx vlan cfg
- */
- uint8_t __otx2_io cfg_type;
- union {
- /* Valid when cfg_type is '0' */
- struct {
- uint64_t __otx2_io vtag0;
- uint64_t __otx2_io vtag1;
-
- /* cfg_vtag0 & cfg_vtag1 fields are valid
- * when free_vtag0 & free_vtag1 are '0's.
- */
- /* cfg_vtag0 = 1 to configure vtag0 */
- uint8_t __otx2_io cfg_vtag0 :1;
- /* cfg_vtag1 = 1 to configure vtag1 */
- uint8_t __otx2_io cfg_vtag1 :1;
-
- /* vtag0_idx & vtag1_idx are only valid when
- * both cfg_vtag0 & cfg_vtag1 are '0's,
- * these fields are used along with free_vtag0
- * & free_vtag1 to free the nix lf's tx_vlan
- * configuration.
- *
- * Denotes the indices of tx_vtag def registers
- * that needs to be cleared and freed.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-
- /* Free_vtag0 & free_vtag1 fields are valid
- * when cfg_vtag0 & cfg_vtag1 are '0's.
- */
- /* Free_vtag0 = 1 clears vtag0 configuration
- * vtag0_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag0 :1;
- /* Free_vtag1 = 1 clears vtag1 configuration
- * vtag1_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag1 :1;
- } tx;
-
- /* Valid when cfg_type is '1' */
- struct {
- /* Rx vtag type index, valid values are in 0..7 range */
- uint8_t __otx2_io vtag_type;
- /* Rx vtag strip */
- uint8_t __otx2_io strip_vtag :1;
- /* Rx vtag capture */
- uint8_t __otx2_io capture_vtag :1;
- } rx;
- };
-};
-
-struct nix_vtag_config_rsp {
- struct mbox_msghdr hdr;
- /* Indices of tx_vtag def registers used to configure
- * tx vtag0 & vtag1 headers, these indices are valid
- * when nix_vtag_config mbox requested for vtag0 and/
- * or vtag1 configuration.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-};
-
-struct nix_rss_flowkey_cfg {
- struct mbox_msghdr hdr;
- int __otx2_io mcam_index; /* MCAM entry index to modify */
- uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */
-#define FLOW_KEY_TYPE_PORT BIT(0)
-#define FLOW_KEY_TYPE_IPV4 BIT(1)
-#define FLOW_KEY_TYPE_IPV6 BIT(2)
-#define FLOW_KEY_TYPE_TCP BIT(3)
-#define FLOW_KEY_TYPE_UDP BIT(4)
-#define FLOW_KEY_TYPE_SCTP BIT(5)
-#define FLOW_KEY_TYPE_NVGRE BIT(6)
-#define FLOW_KEY_TYPE_VXLAN BIT(7)
-#define FLOW_KEY_TYPE_GENEVE BIT(8)
-#define FLOW_KEY_TYPE_ETH_DMAC BIT(9)
-#define FLOW_KEY_TYPE_IPV6_EXT BIT(10)
-#define FLOW_KEY_TYPE_GTPU BIT(11)
-#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12)
-#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13)
-#define FLOW_KEY_TYPE_INNR_TCP BIT(14)
-#define FLOW_KEY_TYPE_INNR_UDP BIT(15)
-#define FLOW_KEY_TYPE_INNR_SCTP BIT(16)
-#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17)
-#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18)
-#define FLOW_KEY_TYPE_CUSTOM0 BIT(19)
-#define FLOW_KEY_TYPE_VLAN BIT(20)
-#define FLOW_KEY_TYPE_L4_DST BIT(28)
-#define FLOW_KEY_TYPE_L4_SRC BIT(29)
-#define FLOW_KEY_TYPE_L3_DST BIT(30)
-#define FLOW_KEY_TYPE_L3_SRC BIT(31)
- uint8_t __otx2_io group; /* RSS context or group */
-};
-
-struct nix_rss_flowkey_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io alg_idx; /* Selected algo index */
-};
-
-struct nix_set_mac_addr {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_get_mac_addr_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_mark_format_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io offset;
- uint8_t __otx2_io y_mask;
- uint8_t __otx2_io y_val;
- uint8_t __otx2_io r_mask;
- uint8_t __otx2_io r_val;
-};
-
-struct nix_mark_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mark_format_idx;
-};
-
-struct nix_lso_format_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io field_mask;
- uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX];
-};
-
-struct nix_lso_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lso_format_idx;
-};
-
-struct nix_rx_mode {
- struct mbox_msghdr hdr;
-#define NIX_RX_MODE_UCAST BIT(0)
-#define NIX_RX_MODE_PROMISC BIT(1)
-#define NIX_RX_MODE_ALLMULTI BIT(2)
- uint16_t __otx2_io mode;
-};
-
-struct nix_rx_cfg {
- struct mbox_msghdr hdr;
-#define NIX_RX_OL3_VERIFY BIT(0)
-#define NIX_RX_OL4_VERIFY BIT(1)
- uint8_t __otx2_io len_verify; /* Outer L3/L4 len check */
-#define NIX_RX_CSUM_OL4_VERIFY BIT(0)
- uint8_t __otx2_io csum_verify; /* Outer L4 checksum verification */
-};
-
-struct nix_frs_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */
- uint8_t __otx2_io update_minlen; /* Set minlen also */
- uint8_t __otx2_io sdp_link; /* Set SDP RX link */
- uint16_t __otx2_io maxlen;
- uint16_t __otx2_io minlen;
-};
-
-struct nix_set_vlan_tpid {
- struct mbox_msghdr hdr;
-#define NIX_VLAN_TYPE_INNER 0
-#define NIX_VLAN_TYPE_OUTER 1
- uint8_t __otx2_io vlan_type;
- uint16_t __otx2_io tpid;
-};
-
-struct nix_bp_cfg_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io chan_base; /* Starting channel number */
- uint8_t __otx2_io chan_cnt; /* Number of channels */
- uint8_t __otx2_io bpid_per_chan;
- /* bpid_per_chan = 0 assigns single bp id for range of channels */
- /* bpid_per_chan = 1 assigns separate bp id for each channel */
-};
-
-/* PF can be mapped to either CGX or LBK interface,
- * so maximum 64 channels are possible.
- */
-#define NIX_MAX_CHAN 64
-struct nix_bp_cfg_rsp {
- struct mbox_msghdr hdr;
- /* Channel and bpid mapping */
- uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN];
- /* Number of channel for which bpids are assigned */
- uint8_t __otx2_io chan_cnt;
-};
-
-/* Global NIX inline IPSec configuration */
-struct nix_inline_ipsec_cfg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io cpt_credit;
- struct {
- uint8_t __otx2_io egrp;
- uint8_t __otx2_io opcode;
- } gen_cfg;
- struct {
- uint16_t __otx2_io cpt_pf_func;
- uint8_t __otx2_io cpt_slot;
- } inst_qsel;
- uint8_t __otx2_io enable;
-};
-
-/* Per NIX LF inline IPSec configuration */
-struct nix_inline_ipsec_lf_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io sa_base_addr;
- struct {
- uint32_t __otx2_io tag_const;
- uint16_t __otx2_io lenm1_max;
- uint8_t __otx2_io sa_pow2_size;
- uint8_t __otx2_io tt;
- } ipsec_cfg0;
- struct {
- uint32_t __otx2_io sa_idx_max;
- uint8_t __otx2_io sa_idx_w;
- } ipsec_cfg1;
- uint8_t __otx2_io enable;
-};
-
-/* SSO mailbox error codes
- * Range 501 - 600.
- */
-enum sso_af_status {
- SSO_AF_ERR_PARAM = -501,
- SSO_AF_ERR_LF_INVALID = -502,
- SSO_AF_ERR_AF_LF_ALLOC = -503,
- SSO_AF_ERR_GRP_EBUSY = -504,
- SSO_AF_INVAL_NPA_PF_FUNC = -505,
-};
-
-struct sso_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io xaq_buf_size;
- uint32_t __otx2_io xaq_wq_entries;
- uint32_t __otx2_io in_unit_entries;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-/* SSOW mailbox error codes
- * Range 601 - 700.
- */
-enum ssow_af_status {
- SSOW_AF_ERR_PARAM = -601,
- SSOW_AF_ERR_LF_INVALID = -602,
- SSOW_AF_ERR_AF_LF_ALLOC = -603,
-};
-
-struct ssow_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct ssow_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct sso_hw_setconfig {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io npa_aura_id;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_release_xaq {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_info_req {
- struct mbox_msghdr hdr;
- union {
- uint16_t __otx2_io grp;
- uint16_t __otx2_io hws;
- };
-};
-
-struct sso_grp_priority {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint8_t __otx2_io priority;
- uint8_t __otx2_io affinity;
- uint8_t __otx2_io weight;
-};
-
-struct sso_grp_qos_cfg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint32_t __otx2_io xaq_limit;
- uint16_t __otx2_io taq_thr;
- uint16_t __otx2_io iaq_thr;
-};
-
-struct sso_grp_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint64_t __otx2_io ws_pc;
- uint64_t __otx2_io ext_pc;
- uint64_t __otx2_io wa_pc;
- uint64_t __otx2_io ts_pc;
- uint64_t __otx2_io ds_pc;
- uint64_t __otx2_io dq_pc;
- uint64_t __otx2_io aw_status;
- uint64_t __otx2_io page_cnt;
-};
-
-struct sso_hws_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hws;
- uint64_t __otx2_io arbitration;
-};
-
-/* CPT mailbox error codes
- * Range 901 - 1000.
- */
-enum cpt_af_status {
- CPT_AF_ERR_PARAM = -901,
- CPT_AF_ERR_GRP_INVALID = -902,
- CPT_AF_ERR_LF_INVALID = -903,
- CPT_AF_ERR_ACCESS_DENIED = -904,
- CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905,
- CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906,
- CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907,
- CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908
-};
-
-/* CPT mbox message formats */
-
-struct cpt_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint8_t __otx2_io is_write;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_set_crypto_grp_req_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io crypto_eng_grp;
-};
-
-struct cpt_lf_alloc_req_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io nix_pf_func;
- uint16_t __otx2_io sso_pf_func;
- uint16_t __otx2_io eng_grpmask;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_lf_alloc_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io eng_grpmsk;
-};
-
-#define CPT_INLINE_INBOUND 0
-#define CPT_INLINE_OUTBOUND 1
-
-struct cpt_inline_ipsec_cfg_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
- uint8_t __otx2_io slot;
- uint8_t __otx2_io dir;
- uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */
- uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */
-};
-
-struct cpt_rx_inline_lf_cfg_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sso_pf_func;
-};
-
-enum cpt_eng_type {
- CPT_ENG_TYPE_AE = 1,
- CPT_ENG_TYPE_SE = 2,
- CPT_ENG_TYPE_IE = 3,
- CPT_MAX_ENG_TYPES,
-};
-
-/* CPT HW capabilities */
-union cpt_eng_caps {
- uint64_t __otx2_io u;
- struct {
- uint64_t __otx2_io reserved_0_4:5;
- uint64_t __otx2_io mul:1;
- uint64_t __otx2_io sha1_sha2:1;
- uint64_t __otx2_io chacha20:1;
- uint64_t __otx2_io zuc_snow3g:1;
- uint64_t __otx2_io sha3:1;
- uint64_t __otx2_io aes:1;
- uint64_t __otx2_io kasumi:1;
- uint64_t __otx2_io des:1;
- uint64_t __otx2_io crc:1;
- uint64_t __otx2_io reserved_14_63:50;
- };
-};
-
-struct cpt_caps_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cpt_pf_drv_version;
- uint8_t __otx2_io cpt_revision;
- union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES];
-};
-
-/* NPC mbox message structs */
-
-#define NPC_MCAM_ENTRY_INVALID 0xFFFF
-#define NPC_MCAM_INVALID_MAP 0xFFFF
-
-/* NPC mailbox error codes
- * Range 701 - 800.
- */
-enum npc_af_status {
- NPC_MCAM_INVALID_REQ = -701,
- NPC_MCAM_ALLOC_DENIED = -702,
- NPC_MCAM_ALLOC_FAILED = -703,
- NPC_MCAM_PERM_DENIED = -704,
- NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705,
-};
-
-struct npc_mcam_alloc_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MAX_NONCONTIG_ENTRIES 256
- uint8_t __otx2_io contig; /* Contiguous entries ? */
-#define NPC_MCAM_ANY_PRIO 0
-#define NPC_MCAM_LOWER_PRIO 1
-#define NPC_MCAM_HIGHER_PRIO 2
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint16_t __otx2_io ref_entry;
- uint16_t __otx2_io count; /* Number of entries requested */
-};
-
-struct npc_mcam_alloc_entry_rsp {
- struct mbox_msghdr hdr;
- /* Entry alloc'ed or start index if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io entry;
- uint16_t __otx2_io count; /* Number of entries allocated */
- uint16_t __otx2_io free_count; /* Number of entries available */
- uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES];
-};
-
-struct npc_mcam_free_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry; /* Entry index to be freed */
- uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */
-};
-
-struct mcam_entry {
-#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */
- uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io action;
- uint64_t __otx2_io vtag_action;
-};
-
-struct npc_mcam_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io entry; /* MCAM entry to write this match key */
- uint16_t __otx2_io cntr; /* Counter for this MCAM entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */
-};
-
-/* Enable/Disable a given entry */
-struct npc_mcam_ena_dis_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_shift_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MCAM_MAX_SHIFTS 64
- uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io shift_count; /* Number of entries to shift */
-};
-
-struct npc_mcam_shift_entry_rsp {
- struct mbox_msghdr hdr;
- /* Index in 'curr_entry', not entry itself */
- uint16_t __otx2_io failed_entry_idx;
-};
-
-struct npc_mcam_alloc_counter_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io contig; /* Contiguous counters ? */
-#define NPC_MAX_NONCONTIG_COUNTERS 64
- uint16_t __otx2_io count; /* Number of counters requested */
-};
-
-struct npc_mcam_alloc_counter_rsp {
- struct mbox_msghdr hdr;
- /* Counter alloc'ed or start idx if contiguous.
- * Invalid incase of non-contiguous.
- */
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io count; /* Number of counters allocated */
- uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
-};
-
-struct npc_mcam_oper_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr; /* Free a counter or clear/fetch it's stats */
-};
-
-struct npc_mcam_oper_counter_rsp {
- struct mbox_msghdr hdr;
- /* valid only while fetching counter's stats */
- uint64_t __otx2_io stat;
-};
-
-struct npc_mcam_unmap_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io entry; /* Entry and counter to be unmapped */
- uint8_t __otx2_io all; /* Unmap all entries using this counter ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io ref_entry;
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io cntr;
-};
-
-struct npc_get_kex_cfg_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */
- uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */
-#define NPC_MAX_INTF 2
-#define NPC_MAX_LID 8
-#define NPC_MAX_LT 16
-#define NPC_MAX_LD 2
-#define NPC_MAX_LFL 16
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
- uint64_t __otx2_io
- intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
- uint64_t __otx2_io
- intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-#define MKEX_NAME_LEN 128
- uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN];
-};
-
-enum header_fields {
- NPC_DMAC,
- NPC_SMAC,
- NPC_ETYPE,
- NPC_OUTER_VID,
- NPC_TOS,
- NPC_SIP_IPV4,
- NPC_DIP_IPV4,
- NPC_SIP_IPV6,
- NPC_DIP_IPV6,
- NPC_SPORT_TCP,
- NPC_DPORT_TCP,
- NPC_SPORT_UDP,
- NPC_DPORT_UDP,
- NPC_FDSA_VAL,
- NPC_HEADER_FIELDS_MAX,
-};
-
-struct flow_msg {
- unsigned char __otx2_io dmac[6];
- unsigned char __otx2_io smac[6];
- uint16_t __otx2_io etype;
- uint16_t __otx2_io vlan_etype;
- uint16_t __otx2_io vlan_tci;
- union {
- uint32_t __otx2_io ip4src;
- uint32_t __otx2_io ip6src[4];
- };
- union {
- uint32_t __otx2_io ip4dst;
- uint32_t __otx2_io ip6dst[4];
- };
- uint8_t __otx2_io tos;
- uint8_t __otx2_io ip_ver;
- uint8_t __otx2_io ip_proto;
- uint8_t __otx2_io tc;
- uint16_t __otx2_io sport;
- uint16_t __otx2_io dport;
-};
-
-struct npc_install_flow_req {
- struct mbox_msghdr hdr;
- struct flow_msg packet;
- struct flow_msg mask;
- uint64_t __otx2_io features;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io channel;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io set_cntr;
- uint8_t __otx2_io default_rule;
- /* Overwrite(0) or append(1) flow to default rule? */
- uint8_t __otx2_io append;
- uint16_t __otx2_io vf;
- /* action */
- uint32_t __otx2_io index;
- uint16_t __otx2_io match_id;
- uint8_t __otx2_io flow_key_alg;
- uint8_t __otx2_io op;
- /* vtag action */
- uint8_t __otx2_io vtag0_type;
- uint8_t __otx2_io vtag0_valid;
- uint8_t __otx2_io vtag1_type;
- uint8_t __otx2_io vtag1_valid;
-
- /* vtag tx action */
- uint16_t __otx2_io vtag0_def;
- uint8_t __otx2_io vtag0_op;
- uint16_t __otx2_io vtag1_def;
- uint8_t __otx2_io vtag1_op;
-};
-
-struct npc_install_flow_rsp {
- struct mbox_msghdr hdr;
- /* Negative if no counter else counter number */
- int __otx2_io counter;
-};
-
-struct npc_delete_flow_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io start;/*Disable range of entries */
- uint16_t __otx2_io end;
- uint8_t __otx2_io all; /* PF + VFs */
-};
-
-struct npc_mcam_read_entry_req {
- struct mbox_msghdr hdr;
- /* MCAM entry to read */
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_read_entry_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io enable;
-};
-
-struct npc_mcam_read_base_rule_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
-};
-
-/* TIM mailbox error codes
- * Range 801 - 900.
- */
-enum tim_af_status {
- TIM_AF_NO_RINGS_LEFT = -801,
- TIM_AF_INVALID_NPA_PF_FUNC = -802,
- TIM_AF_INVALID_SSO_PF_FUNC = -803,
- TIM_AF_RING_STILL_RUNNING = -804,
- TIM_AF_LF_INVALID = -805,
- TIM_AF_CSIZE_NOT_ALIGNED = -806,
- TIM_AF_CSIZE_TOO_SMALL = -807,
- TIM_AF_CSIZE_TOO_BIG = -808,
- TIM_AF_INTERVAL_TOO_SMALL = -809,
- TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810,
- TIM_AF_INVALID_CLOCK_SOURCE = -811,
- TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812,
- TIM_AF_INVALID_BSIZE = -813,
- TIM_AF_INVALID_ENABLE_PERIODIC = -814,
- TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
- TIM_AF_RING_ALREADY_DISABLED = -817,
-};
-
-enum tim_clk_srcs {
- TIM_CLK_SRCS_TENNS = 0,
- TIM_CLK_SRCS_GPIO = 1,
- TIM_CLK_SRCS_GTI = 2,
- TIM_CLK_SRCS_PTP = 3,
- TIM_CLK_SRSC_INVALID,
-};
-
-enum tim_gpio_edge {
- TIM_GPIO_NO_EDGE = 0,
- TIM_GPIO_LTOH_TRANS = 1,
- TIM_GPIO_HTOL_TRANS = 2,
- TIM_GPIO_BOTH_TRANS = 3,
- TIM_GPIO_INVALID,
-};
-
-enum ptp_op {
- PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */
- PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */
-};
-
-struct ptp_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io op;
- int64_t __otx2_io scaled_ppm;
- uint8_t __otx2_io is_pmu;
-};
-
-struct ptp_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io clk;
- uint64_t __otx2_io tsc;
-};
-
-struct get_hw_cap_rsp {
- struct mbox_msghdr hdr;
- /* Schq mapping fixed or flexible */
- uint8_t __otx2_io nix_fixed_txschq_mapping;
- uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */
-};
-
-struct ndc_sync_op {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io nix_lf_tx_sync;
- uint8_t __otx2_io nix_lf_rx_sync;
- uint8_t __otx2_io npa_lf_sync;
-};
-
-struct tim_lf_alloc_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io sso_pf_func;
-};
-
-struct tim_ring_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
-};
-
-struct tim_config_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint8_t __otx2_io bigendian;
- uint8_t __otx2_io clocksource;
- uint8_t __otx2_io enableperiodic;
- uint8_t __otx2_io enabledontfreebuffer;
- uint32_t __otx2_io bucketsize;
- uint32_t __otx2_io chunksize;
- uint32_t __otx2_io interval;
-};
-
-struct tim_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io tenns_clk;
-};
-
-struct tim_enable_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io timestarted;
- uint32_t __otx2_io currentbucket;
-};
-
-/* REE mailbox error codes
- * Range 1001 - 1100.
- */
-enum ree_af_status {
- REE_AF_ERR_RULE_UNKNOWN_VALUE = -1001,
- REE_AF_ERR_LF_NO_MORE_RESOURCES = -1002,
- REE_AF_ERR_LF_INVALID = -1003,
- REE_AF_ERR_ACCESS_DENIED = -1004,
- REE_AF_ERR_RULE_DB_PARTIAL = -1005,
- REE_AF_ERR_RULE_DB_EQ_BAD_VALUE = -1006,
- REE_AF_ERR_RULE_DB_BLOCK_ALLOC_FAILED = -1007,
- REE_AF_ERR_BLOCK_NOT_IMPLEMENTED = -1008,
- REE_AF_ERR_RULE_DB_INC_OFFSET_TOO_BIG = -1009,
- REE_AF_ERR_RULE_DB_OFFSET_TOO_BIG = -1010,
- REE_AF_ERR_Q_IS_GRACEFUL_DIS = -1011,
- REE_AF_ERR_Q_NOT_GRACEFUL_DIS = -1012,
- REE_AF_ERR_RULE_DB_ALLOC_FAILED = -1013,
- REE_AF_ERR_RULE_DB_TOO_BIG = -1014,
- REE_AF_ERR_RULE_DB_GEQ_BAD_VALUE = -1015,
- REE_AF_ERR_RULE_DB_LEQ_BAD_VALUE = -1016,
- REE_AF_ERR_RULE_DB_WRONG_LENGTH = -1017,
- REE_AF_ERR_RULE_DB_WRONG_OFFSET = -1018,
- REE_AF_ERR_RULE_DB_BLOCK_TOO_BIG = -1019,
- REE_AF_ERR_RULE_DB_SHOULD_FILL_REQUEST = -1020,
- REE_AF_ERR_RULE_DBI_ALLOC_FAILED = -1021,
- REE_AF_ERR_LF_WRONG_PRIORITY = -1022,
- REE_AF_ERR_LF_SIZE_TOO_BIG = -1023,
-};
-
-/* REE mbox message formats */
-
-struct ree_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
-};
-
-struct ree_lf_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io size;
- uint8_t __otx2_io lf;
- uint8_t __otx2_io pri;
-};
-
-struct ree_rule_db_prog_req_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_REQ_BLOCK_SIZE (MBOX_SIZE >> 1)
- uint8_t __otx2_io rule_db[REE_RULE_DB_REQ_BLOCK_SIZE];
- uint32_t __otx2_io blkaddr; /* REE0 or REE1 */
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
- uint8_t __otx2_io is_incremental; /* is incremental flow */
- uint8_t __otx2_io is_dbi; /* is rule db incremental */
-};
-
-struct ree_rule_db_get_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io offset; /* retrieve db from this offset */
- uint8_t __otx2_io is_dbi; /* is request for rule db incremental */
-};
-
-struct ree_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint32_t __otx2_io blkaddr;
- uint8_t __otx2_io is_write;
-};
-
-struct ree_rule_db_len_rsp_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io len;
- uint32_t __otx2_io inc_len;
-};
-
-struct ree_rule_db_get_rsp_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_RSP_BLOCK_SIZE (MBOX_DOWN_TX_SIZE - SZ_1K)
- uint8_t __otx2_io rule_db[REE_RULE_DB_RSP_BLOCK_SIZE];
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
-};
-
-__rte_internal
-const char *otx2_mbox_id2name(uint16_t id);
-int otx2_mbox_id2size(uint16_t id);
-void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevsi, uint64_t intr_offset);
-void otx2_mbox_fini(struct otx2_mbox *mbox);
-__rte_internal
-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
-__rte_internal
-int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo);
-__rte_internal
-int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
-__rte_internal
-int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
-__rte_internal
-struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
- int size, int size_rsp);
-
-static inline struct mbox_msghdr *
-otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size)
-{
- return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
-}
-
-static inline void
-otx2_mbox_req_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_REQ_SIG;
- hdr->ver = OTX2_MBOX_VERSION;
- hdr->id = mbox_id;
- hdr->pcifunc = 0;
-}
-
-static inline void
-otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_RSP_SIG;
- hdr->rc = -ETIMEDOUT;
- hdr->id = mbox_id;
-}
-
-static inline bool
-otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- bool ret;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- ret = mdev->num_msgs != 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return ret;
-}
-
-static inline int
-otx2_mbox_process(struct otx2_mbox *mbox)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, NULL);
-}
-
-static inline int
-otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, msg);
-}
-
-static inline int
-otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo);
-}
-
-static inline int
-otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, msg, tmo);
-}
-
-int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */);
-int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func,
- uint16_t id);
-
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
-static inline struct _req_type \
-*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \
-{ \
- struct _req_type *req; \
- \
- req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
- mbox, 0, sizeof(struct _req_type), \
- sizeof(struct _rsp_type)); \
- if (!req) \
- return NULL; \
- \
- req->hdr.sig = OTX2_MBOX_REQ_SIG; \
- req->hdr.id = _id; \
- otx2_mbox_dbg("id=0x%x (%s)", \
- req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \
- return req; \
-}
-
-MBOX_MESSAGES
-#undef M
-
-/* This is required for copy operations from device memory which do not work on
- * addresses which are unaligned to 16B. This is because of specific
- * optimizations to libc memcpy.
- */
-static inline volatile void *
-otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l)
-{
- const volatile uint8_t *sb;
- volatile uint8_t *db;
- size_t i;
-
- if (!d || !s)
- return NULL;
- db = (volatile uint8_t *)d;
- sb = (const volatile uint8_t *)s;
- for (i = 0; i < l; i++)
- db[i] = sb[i];
- return d;
-}
-
-/* This is required for memory operations from device memory which do not
- * work on addresses which are unaligned to 16B. This is because of specific
- * optimizations to libc memset.
- */
-static inline void
-otx2_mbox_memset(volatile void *d, uint8_t val, size_t l)
-{
- volatile uint8_t *db;
- size_t i = 0;
-
- if (!d || !l)
- return;
- db = (volatile uint8_t *)d;
- for (i = 0; i < l; i++)
- db[i] = val;
-}
-
-#endif /* __OTX2_MBOX_H__ */
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
deleted file mode 100644
index b561b67174..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <ethdev_driver.h>
-#include <rte_spinlock.h>
-
-#include "otx2_common.h"
-#include "otx2_sec_idev.h"
-
-static struct otx2_sec_idev_cfg sec_cfg[OTX2_MAX_INLINE_PORTS];
-
-/**
- * @internal
- * Check if rte_eth_dev is security offload capable otx2_eth_dev
- */
-uint8_t
-otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_VF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- return 1;
-
- return 0;
-}
-
-int
-otx2_sec_idev_cfg_init(int port_id)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i;
-
- cfg = &sec_cfg[port_id];
- cfg->tx_cpt_idx = 0;
- rte_spinlock_init(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- cfg->tx_cpt[i].qp = NULL;
- rte_atomic16_set(&cfg->tx_cpt[i].ref_cnt, 0);
- }
-
- return 0;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i, ret;
-
- if (qp == NULL || port_id >= OTX2_MAX_INLINE_PORTS)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- /* Find a free slot to save CPT LF */
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == NULL) {
- cfg->tx_cpt[i].qp = qp;
- ret = 0;
- goto unlock;
- }
- }
-
- ret = -EINVAL;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i, ret;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp != qp)
- continue;
-
- /* Don't free if the QP is in use by any sec session */
- if (rte_atomic16_read(&cfg->tx_cpt[i].ref_cnt)) {
- ret = -EBUSY;
- } else {
- cfg->tx_cpt[i].qp = NULL;
- ret = 0;
- }
-
- goto unlock;
- }
-
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- }
-
- return -ENOENT;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t index;
- int i, ret;
-
- if (port_id >= OTX2_MAX_INLINE_PORTS || qp == NULL)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- index = cfg->tx_cpt_idx;
-
- /* Get the next index with valid data */
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[index].qp != NULL)
- break;
- index = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
- }
-
- if (i >= OTX2_MAX_CPT_QP_PER_PORT) {
- ret = -EINVAL;
- goto unlock;
- }
-
- *qp = cfg->tx_cpt[index].qp;
- rte_atomic16_inc(&cfg->tx_cpt[index].ref_cnt);
-
- cfg->tx_cpt_idx = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
-
- ret = 0;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == qp) {
- rte_atomic16_dec(&cfg->tx_cpt[i].ref_cnt);
- return 0;
- }
- }
- }
-
- return -EINVAL;
-}
diff --git a/drivers/common/octeontx2/otx2_sec_idev.h b/drivers/common/octeontx2/otx2_sec_idev.h
deleted file mode 100644
index 89cdaf66ab..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_SEC_IDEV_H_
-#define _OTX2_SEC_IDEV_H_
-
-#include <rte_ethdev.h>
-
-#define OTX2_MAX_CPT_QP_PER_PORT 64
-#define OTX2_MAX_INLINE_PORTS 64
-
-struct otx2_cpt_qp;
-
-struct otx2_sec_idev_cfg {
- struct {
- struct otx2_cpt_qp *qp;
- rte_atomic16_t ref_cnt;
- } tx_cpt[OTX2_MAX_CPT_QP_PER_PORT];
-
- uint16_t tx_cpt_idx;
- rte_spinlock_t tx_cpt_lock;
-};
-
-__rte_internal
-uint8_t otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev);
-
-__rte_internal
-int otx2_sec_idev_cfg_init(int port_id);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp);
-
-#endif /* _OTX2_SEC_IDEV_H_ */
diff --git a/drivers/common/octeontx2/version.map b/drivers/common/octeontx2/version.map
deleted file mode 100644
index b58f19ce32..0000000000
--- a/drivers/common/octeontx2/version.map
+++ /dev/null
@@ -1,44 +0,0 @@
-INTERNAL {
- global:
-
- otx2_dev_active_vfs;
- otx2_dev_fini;
- otx2_dev_priv_init;
- otx2_disable_irqs;
- otx2_eth_dev_is_sec_capable;
- otx2_intra_dev_get_cfg;
- otx2_logtype_base;
- otx2_logtype_dpi;
- otx2_logtype_ep;
- otx2_logtype_mbox;
- otx2_logtype_nix;
- otx2_logtype_npa;
- otx2_logtype_npc;
- otx2_logtype_ree;
- otx2_logtype_sso;
- otx2_logtype_tim;
- otx2_logtype_tm;
- otx2_mbox_alloc_msg_rsp;
- otx2_mbox_get_rsp;
- otx2_mbox_get_rsp_tmo;
- otx2_mbox_id2name;
- otx2_mbox_msg_send;
- otx2_mbox_wait_for_rsp;
- otx2_npa_lf_active;
- otx2_npa_lf_obj_get;
- otx2_npa_lf_obj_ref;
- otx2_npa_pf_func_get;
- otx2_npa_set_defaults;
- otx2_parse_common_devargs;
- otx2_register_irq;
- otx2_sec_idev_cfg_init;
- otx2_sec_idev_tx_cpt_qp_add;
- otx2_sec_idev_tx_cpt_qp_get;
- otx2_sec_idev_tx_cpt_qp_put;
- otx2_sec_idev_tx_cpt_qp_remove;
- otx2_sso_pf_func_get;
- otx2_sso_pf_func_set;
- otx2_unregister_irq;
-
- local: *;
-};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 59f02ea47c..147b8cf633 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -16,7 +16,6 @@ drivers = [
'nitrox',
'null',
'octeontx',
- 'octeontx2',
'openssl',
'scheduler',
'virtio',
diff --git a/drivers/crypto/octeontx2/meson.build b/drivers/crypto/octeontx2/meson.build
deleted file mode 100644
index 3b387cc570..0000000000
--- a/drivers/crypto/octeontx2/meson.build
+++ /dev/null
@@ -1,30 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (C) 2019 Marvell International Ltd.
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-deps += ['bus_pci']
-deps += ['common_cpt']
-deps += ['common_octeontx2']
-deps += ['ethdev']
-deps += ['eventdev']
-deps += ['security']
-
-sources = files(
- 'otx2_cryptodev.c',
- 'otx2_cryptodev_capabilities.c',
- 'otx2_cryptodev_hw_access.c',
- 'otx2_cryptodev_mbox.c',
- 'otx2_cryptodev_ops.c',
- 'otx2_cryptodev_sec.c',
-)
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../common/octeontx2')
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../mempool/octeontx2')
-includes += include_directories('../../net/octeontx2')
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
deleted file mode 100644
index fc7ad05366..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ /dev/null
@@ -1,188 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_crypto.h>
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_dev.h>
-#include <rte_errno.h>
-#include <rte_mempool.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_dev.h"
-
-/* CPT common headers */
-#include "cpt_common.h"
-#include "cpt_pmd_logs.h"
-
-uint8_t otx2_cryptodev_driver_id;
-
-static struct rte_pci_id pci_id_cpt_table[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_CPT_VF)
- },
- /* sentinel */
- {
- .device_id = 0
- },
-};
-
-uint64_t
-otx2_cpt_default_ff_get(void)
-{
- return RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_IN_PLACE_SGL |
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
- RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_SECURITY |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-}
-
-static int
-otx2_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- struct rte_cryptodev_pmd_init_params init_params = {
- .name = "",
- .socket_id = rte_socket_id(),
- .private_data_size = sizeof(struct otx2_cpt_vf)
- };
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
- struct otx2_dev *otx2_dev;
- struct otx2_cpt_vf *vf;
- uint16_t nb_queues;
- int ret;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
- if (dev == NULL) {
- ret = -ENODEV;
- goto exit;
- }
-
- dev->dev_ops = &otx2_cpt_ops;
-
- dev->driver_id = otx2_cryptodev_driver_id;
-
- /* Get private data space allocated */
- vf = dev->data->dev_private;
-
- otx2_dev = &vf->otx2_dev;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
- if (ret) {
- CPT_LOG_ERR("Could not initialize otx2_dev");
- goto pmd_destroy;
- }
-
- /* Get number of queues available on the device */
- ret = otx2_cpt_available_queues_get(dev, &nb_queues);
- if (ret) {
- CPT_LOG_ERR("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_CPT_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- CPT_LOG_ERR("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- CPT_LOG_INFO("Max queues supported by device: %d",
- vf->max_queues);
-
- ret = otx2_cpt_hardware_caps_get(dev, vf->hw_caps);
- if (ret) {
- CPT_LOG_ERR("Could not determine hardware capabilities");
- goto otx2_dev_fini;
- }
- }
-
- otx2_crypto_capabilities_init(vf->hw_caps);
- otx2_crypto_sec_capabilities_init(vf->hw_caps);
-
- /* Create security ctx */
- ret = otx2_crypto_sec_ctx_create(dev);
- if (ret)
- goto otx2_dev_fini;
-
- dev->feature_flags = otx2_cpt_default_ff_get();
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- otx2_cpt_set_enqdeq_fns(dev);
-
- rte_cryptodev_pmd_probing_finish(dev);
-
- return 0;
-
-otx2_dev_fini:
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- otx2_dev_fini(pci_dev, otx2_dev);
-pmd_destroy:
- rte_cryptodev_pmd_destroy(dev);
-exit:
- CPT_LOG_ERR("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
- pci_dev->id.vendor_id, pci_dev->id.device_id);
- return ret;
-}
-
-static int
-otx2_cpt_pci_remove(struct rte_pci_device *pci_dev)
-{
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
-
- if (pci_dev == NULL)
- return -EINVAL;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_get_named_dev(name);
- if (dev == NULL)
- return -ENODEV;
-
- /* Destroy security ctx */
- otx2_crypto_sec_ctx_destroy(dev);
-
- return rte_cryptodev_pmd_destroy(dev);
-}
-
-static struct rte_pci_driver otx2_cryptodev_pmd = {
- .id_table = pci_id_cpt_table,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_cpt_pci_probe,
- .remove = otx2_cpt_pci_remove,
-};
-
-static struct cryptodev_driver otx2_cryptodev_drv;
-
-RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_OCTEONTX2_PMD, otx2_cryptodev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_OCTEONTX2_PMD, pci_id_cpt_table);
-RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_OCTEONTX2_PMD, "vfio-pci");
-RTE_PMD_REGISTER_CRYPTO_DRIVER(otx2_cryptodev_drv, otx2_cryptodev_pmd.driver,
- otx2_cryptodev_driver_id);
-RTE_LOG_REGISTER_DEFAULT(otx2_cpt_logtype, NOTICE);
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.h b/drivers/crypto/octeontx2/otx2_cryptodev.h
deleted file mode 100644
index 15ecfe45b6..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_H_
-#define _OTX2_CRYPTODEV_H_
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-
-#include "otx2_dev.h"
-
-/* Marvell OCTEON TX2 Crypto PMD device name */
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
-
-#define OTX2_CPT_MAX_LFS 128
-#define OTX2_CPT_MAX_QUEUES_PER_VF 64
-#define OTX2_CPT_MAX_BLKS 2
-#define OTX2_CPT_PMD_VERSION 3
-#define OTX2_CPT_REVISION_ID_3 3
-
-/**
- * Device private data
- */
-struct otx2_cpt_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of crypto queues attached */
- uint16_t lf_msixoff[OTX2_CPT_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t lf_blkaddr[OTX2_CPT_MAX_LFS];
- /**< CPT0/1 BLKADDR of LFs */
- uint8_t cpt_revision;
- /**< CPT revision */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
- union cpt_eng_caps hw_caps[CPT_MAX_ENG_TYPES];
- /**< CPT device capabilities */
-};
-
-struct cpt_meta_info {
- uint64_t deq_op_info[5];
- uint64_t comp_code_sz;
- union cpt_res_s cpt_res __rte_aligned(16);
- struct cpt_request_info cpt_req;
-};
-
-#define CPT_LOGTYPE otx2_cpt_logtype
-
-extern int otx2_cpt_logtype;
-
-/*
- * Crypto device driver ID
- */
-extern uint8_t otx2_cryptodev_driver_id;
-
-uint64_t otx2_cpt_default_ff_get(void);
-void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
deleted file mode 100644
index ba3fbbbe22..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
+++ /dev/null
@@ -1,924 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_mbox.h"
-
-#define CPT_EGRP_GET(hw_caps, name, egrp) do { \
- if ((hw_caps[CPT_ENG_TYPE_SE].name) && \
- (hw_caps[CPT_ENG_TYPE_IE].name)) \
- *egrp = OTX2_CPT_EGRP_SE_IE; \
- else if (hw_caps[CPT_ENG_TYPE_SE].name) \
- *egrp = OTX2_CPT_EGRP_SE; \
- else if (hw_caps[CPT_ENG_TYPE_AE].name) \
- *egrp = OTX2_CPT_EGRP_AE; \
- else \
- *egrp = OTX2_CPT_EGRP_MAX; \
-} while (0)
-
-#define CPT_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- cpt_caps_add(caps_##name, RTE_DIM(caps_##name)); \
-} while (0)
-
-#define SEC_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- sec_caps_add(sec_caps_##name, RTE_DIM(sec_caps_##name));\
-} while (0)
-
-#define OTX2_CPT_MAX_CAPS 34
-#define OTX2_SEC_MAX_CAPS 4
-
-static struct rte_cryptodev_capabilities otx2_cpt_caps[OTX2_CPT_MAX_CAPS];
-static struct rte_cryptodev_capabilities otx2_cpt_sec_caps[OTX2_SEC_MAX_CAPS];
-
-static const struct rte_cryptodev_capabilities caps_mul[] = {
- { /* RSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
- (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
- (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* MOD_EXP */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
- .op_types = 0,
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* ECDSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY)),
- }
- },
- }
- },
- { /* ECPM */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM,
- .op_types = 0
- }
- },
- }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_sha1_sha2[] = {
- { /* SHA1 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 20,
- .max = 20,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA224 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA224 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
- { /* SHA384 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 48,
- .max = 48,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA384 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 24,
- .max = 48,
- .increment = 24
- },
- }, }
- }, }
- },
- { /* SHA512 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512,
- .block_size = 128,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 64,
- .max = 64,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA512 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
- .block_size = 128,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 32,
- .max = 64,
- .increment = 32
- },
- }, }
- }, }
- },
- { /* MD5 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* MD5 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 8,
- .max = 64,
- .increment = 8
- },
- .digest_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_chacha20[] = {
- { /* Chacha20-Poly1305 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
- .block_size = 64,
- .key_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- }
-};
-
-static const struct rte_cryptodev_capabilities caps_zuc_snow3g[] = {
- { /* SNOW 3G (UEA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EEA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SNOW 3G (UIA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EIA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_aes[] = {
- { /* AES GMAC (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_AES_GMAC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 8,
- .max = 16,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CTR */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CTR,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- }
- }, }
- }, }
- },
- { /* AES XTS */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_XTS,
- .block_size = 16,
- .key_size = {
- .min = 32,
- .max = 64,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 4,
- .max = 16,
- .increment = 1
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_kasumi[] = {
- { /* KASUMI (F8) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* KASUMI (F9) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_KASUMI_F9,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_des[] = {
- { /* 3DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 16,
- .increment = 8
- }
- }, }
- }, }
- },
- { /* 3DES ECB */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_null[] = {
- { /* NULL (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- }, },
- }, },
- },
- { /* NULL (CIPHER) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, },
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_end[] = {
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_security_capability
-otx2_crypto_sec_capabilities[] = {
- { /* IPsec Lookaside Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Lookaside Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-cpt_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_CPT_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- CPT_CAPS_ADD(hw_caps, mul);
- CPT_CAPS_ADD(hw_caps, sha1_sha2);
- CPT_CAPS_ADD(hw_caps, chacha20);
- CPT_CAPS_ADD(hw_caps, zuc_snow3g);
- CPT_CAPS_ADD(hw_caps, aes);
- CPT_CAPS_ADD(hw_caps, kasumi);
- CPT_CAPS_ADD(hw_caps, des);
-
- cpt_caps_add(caps_null, RTE_DIM(caps_null));
- cpt_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void)
-{
- return otx2_cpt_caps;
-}
-
-static void
-sec_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_SEC_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_sec_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- SEC_CAPS_ADD(hw_caps, aes);
- SEC_CAPS_ADD(hw_caps, sha1_sha2);
-
- sec_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_crypto_sec_capabilities;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
deleted file mode 100644
index c1e0001190..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_CAPABILITIES_H_
-#define _OTX2_CRYPTODEV_CAPABILITIES_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_mbox.h"
-
-enum otx2_cpt_egrp {
- OTX2_CPT_EGRP_SE = 0,
- OTX2_CPT_EGRP_SE_IE = 1,
- OTX2_CPT_EGRP_AE = 2,
- OTX2_CPT_EGRP_MAX,
-};
-
-/*
- * Initialize crypto capabilities for the device
- *
- */
-void otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get capabilities list for the device
- *
- */
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void);
-
-/*
- * Initialize security capabilities for the device
- *
- */
-void otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get security capabilities list for the device
- *
- */
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused);
-
-#endif /* _OTX2_CRYPTODEV_CAPABILITIES_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
deleted file mode 100644
index d5d6b5bad7..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ /dev/null
@@ -1,225 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <rte_cryptodev.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_dev.h"
-
-#include "cpt_pmd_logs.h"
-
-static void
-otx2_cpt_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_CPT_LF_MISC_INT);
- if (intr == 0)
- return;
-
- CPT_LOG_ERR("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_CPT_LF_MISC_INT);
-}
-
-static void
-otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, otx2_cpt_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, otx2_cpt_lf_err_intr_handler,
- (void *)base, msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_cpt_err_intr_register(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- CPT_LOG_ERR("Invalid CPT LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- ret = otx2_cpt_lf_err_intr_register(dev, vf->lf_msixoff[i],
- base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[j], j);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
-
- /*
- * Failed to register error interrupt. Not returning error as this would
- * prevent application from enabling larger number of devs.
- *
- * This failure is a known issue because otx2_dev_init() initializes
- * interrupts based on static values from ATF, and the actual number
- * of interrupts needed (which is based on LFs) can be determined only
- * after otx2_dev_init() sets up interrupts which includes mbox
- * interrupts.
- */
- return 0;
-}
-
-int
-otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask, uint8_t pri,
- uint32_t size_div40)
-{
- union otx2_cpt_af_lf_ctl af_lf_ctl;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_q_base base;
- union otx2_cpt_lf_q_size size;
- union otx2_cpt_lf_ctl lf_ctl;
- int ret;
-
- /* Set engine group mask and priority */
-
- ret = otx2_cpt_af_reg_read(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, &af_lf_ctl.u);
- if (ret)
- return ret;
- af_lf_ctl.s.grp = grp_mask;
- af_lf_ctl.s.pri = pri ? 1 : 0;
- ret = otx2_cpt_af_reg_write(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, af_lf_ctl.u);
- if (ret)
- return ret;
-
- /* Set instruction queue base address */
-
- base.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_BASE);
- base.s.fault = 0;
- base.s.stopped = 0;
- base.s.addr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_CPT_LF_Q_BASE);
-
- /* Set instruction queue size */
-
- size.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_SIZE);
- size.s.size_div40 = size_div40;
- otx2_write64(size.u, qp->base + OTX2_CPT_LF_Q_SIZE);
-
- /* Enable instruction queue */
-
- lf_ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- lf_ctl.s.ena = 1;
- otx2_write64(lf_ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Start instruction execution */
-
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 1;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- return 0;
-}
-
-void
-otx2_cpt_iq_disable(struct otx2_cpt_qp *qp)
-{
- union otx2_cpt_lf_q_grp_ptr grp_ptr;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_ctl ctl;
- int cnt;
-
- /* Stop instruction execution */
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 0x0;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- /* Disable instructions enqueuing */
- ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- ctl.s.ena = 0;
- otx2_write64(ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Wait for instruction queue to become empty */
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if (inprog.s.grb_partial)
- cnt = 0;
- else
- cnt++;
- grp_ptr.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_GRP_PTR);
- } while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
-
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if ((inprog.s.inflight == 0) &&
- (inprog.s.gwb_cnt < 40) &&
- ((inprog.s.grb_cnt == 0) || (inprog.s.grb_cnt == 40)))
- cnt++;
- else
- cnt = 0;
- } while (cnt < 10);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
deleted file mode 100644
index 90a338e05a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_HW_ACCESS_H_
-#define _OTX2_CRYPTODEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include <rte_cryptodev.h>
-#include <rte_memory.h>
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-#include "cpt_mcode_defines.h"
-
-#include "otx2_dev.h"
-#include "otx2_cryptodev_qp.h"
-
-/* CPT instruction queue length.
- * Use queue size as power of 2 for aiding in pending queue calculations.
- */
-#define OTX2_CPT_DEFAULT_CMD_QLEN 8192
-
-/* Mask which selects all engine groups */
-#define OTX2_CPT_ENG_GRPS_MASK 0xFF
-
-/* Register offsets */
-
-/* LMT LF registers */
-#define OTX2_LMT_LF_LMTLINE(a) (0x0ull | (uint64_t)(a) << 3)
-
-/* CPT LF registers */
-#define OTX2_CPT_LF_CTL 0x10ull
-#define OTX2_CPT_LF_INPROG 0x40ull
-#define OTX2_CPT_LF_MISC_INT 0xb0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1S 0xd0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1C 0xe0ull
-#define OTX2_CPT_LF_Q_BASE 0xf0ull
-#define OTX2_CPT_LF_Q_SIZE 0x100ull
-#define OTX2_CPT_LF_Q_GRP_PTR 0x120ull
-#define OTX2_CPT_LF_NQ(a) (0x400ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_AF_LF_CTL(a) (0x27000ull | (uint64_t)(a) << 3)
-#define OTX2_CPT_AF_LF_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_LF_BAR2(vf, blk_addr, q_id) \
- ((vf)->otx2_dev.bar2 + \
- ((blk_addr << 20) | ((q_id) << 12)))
-
-#define OTX2_CPT_QUEUE_HI_PRIO 0x1
-
-union otx2_cpt_lf_ctl {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t reserved_3_3 : 1;
- uint64_t fc_hyst_bits : 4;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_cpt_lf_inprog {
- uint64_t u;
- struct {
- uint64_t inflight : 9;
- uint64_t reserved_9_15 : 7;
- uint64_t eena : 1;
- uint64_t grp_drp : 1;
- uint64_t reserved_18_30 : 13;
- uint64_t grb_partial : 1;
- uint64_t grb_cnt : 8;
- uint64_t gwb_cnt : 8;
- uint64_t reserved_48_63 : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_base {
- uint64_t u;
- struct {
- uint64_t fault : 1;
- uint64_t stopped : 1;
- uint64_t reserved_2_6 : 5;
- uint64_t addr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_cpt_lf_q_size {
- uint64_t u;
- struct {
- uint64_t size_div40 : 15;
- uint64_t reserved_15_63 : 49;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl {
- uint64_t u;
- struct {
- uint64_t pri : 1;
- uint64_t reserved_1_8 : 8;
- uint64_t pf_func_inst : 1;
- uint64_t cont_err : 1;
- uint64_t reserved_11_15 : 5;
- uint64_t nixtx_en : 1;
- uint64_t reserved_17_47 : 31;
- uint64_t grp : 8;
- uint64_t reserved_56_63 : 8;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl2 {
- uint64_t u;
- struct {
- uint64_t exe_no_swap : 1;
- uint64_t exe_ldwb : 1;
- uint64_t reserved_2_31 : 30;
- uint64_t sso_pf_func : 16;
- uint64_t nix_pf_func : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_grp_ptr {
- uint64_t u;
- struct {
- uint64_t dq_ptr : 15;
- uint64_t reserved_31_15 : 17;
- uint64_t nq_ptr : 15;
- uint64_t reserved_47_62 : 16;
- uint64_t xq_xor : 1;
- } s;
-};
-
-/*
- * Enumeration cpt_9x_comp_e
- *
- * CPT 9X Completion Enumeration
- * Enumerates the values of CPT_RES_S[COMPCODE].
- */
-enum cpt_9x_comp_e {
- CPT_9X_COMP_E_NOTDONE = 0x00,
- CPT_9X_COMP_E_GOOD = 0x01,
- CPT_9X_COMP_E_FAULT = 0x02,
- CPT_9X_COMP_E_HWERR = 0x04,
- CPT_9X_COMP_E_INSTERR = 0x05,
- CPT_9X_COMP_E_LAST_ENTRY = 0x06
-};
-
-void otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev);
-
-int otx2_cpt_err_intr_register(const struct rte_cryptodev *dev);
-
-int otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask,
- uint8_t pri, uint32_t size_div40);
-
-void otx2_cpt_iq_disable(struct otx2_cpt_qp *qp);
-
-#endif /* _OTX2_CRYPTODEV_HW_ACCESS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
deleted file mode 100644
index f9e7b0b474..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ /dev/null
@@ -1,285 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <cryptodev_pmd.h>
-#include <rte_ethdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_sec_idev.h"
-#include "otx2_mbox.h"
-
-#include "cpt_pmd_logs.h"
-
-int
-otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct cpt_caps_rsp_msg *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_cpt_caps_get(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (rsp->cpt_pf_drv_version != OTX2_CPT_PMD_VERSION) {
- otx2_err("Incompatible CPT PMD version"
- "(Kernel: 0x%04x DPDK: 0x%04x)",
- rsp->cpt_pf_drv_version, OTX2_CPT_PMD_VERSION);
- return -EPIPE;
- }
-
- vf->cpt_revision = rsp->cpt_revision;
- otx2_mbox_memcpy(hw_caps, rsp->eng_caps,
- sizeof(union cpt_eng_caps) * CPT_MAX_ENG_TYPES);
-
- return 0;
-}
-
-int
-otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct free_rsrcs_rsp *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- *nb_queues = rsp->cpt + rsp->cpt1;
- return 0;
-}
-
-int
-otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int blkaddr[OTX2_CPT_MAX_BLKS];
- struct rsrc_attach_req *req;
- int blknum = 0;
- int i, ret;
-
- blkaddr[0] = RVU_BLOCK_ADDR_CPT0;
- blkaddr[1] = RVU_BLOCK_ADDR_CPT1;
-
- /* Ask AF to attach required LFs */
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- if ((vf->cpt_revision == OTX2_CPT_REVISION_ID_3) &&
- (vf->otx2_dev.pf_func & 0x1))
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
-
- /* 1 LF = 1 queue */
- req->cptlfs = nb_queues;
- req->cpt_blkaddr = blkaddr[blknum];
-
- ret = otx2_mbox_process(mbox);
- if (ret == -ENOSPC) {
- if (vf->cpt_revision == OTX2_CPT_REVISION_ID_3) {
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
- req->cpt_blkaddr = blkaddr[blknum];
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- return -EIO;
- }
- } else if (ret < 0) {
- return -EIO;
- }
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
- for (i = 0; i < nb_queues; i++)
- vf->lf_blkaddr[i] = req->cpt_blkaddr;
-
- return 0;
-}
-
-int
-otx2_cpt_queues_detach(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->cptlfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct msix_offset_rsp *rsp;
- uint32_t i, ret;
-
- /* Get CPT MSI-X vector offsets */
-
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++)
- vf->lf_msixoff[i] = (vf->lf_blkaddr[i] == RVU_BLOCK_ADDR_CPT1) ?
- rsp->cpt1_lf_msixoff[i] : rsp->cptlf_msixoff[i];
-
- return 0;
-}
-
-static int
-otx2_cpt_send_mbox_msg(struct otx2_cpt_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- CPT_LOG_ERR("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct cpt_rd_wr_reg_msg *msg;
- int ret, off;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = blkaddr;
-
- ret = otx2_cpt_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct cpt_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rd_wr_reg_msg *msg;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = blkaddr;
-
- return otx2_cpt_send_mbox_msg(vf);
-}
-
-int
-otx2_cpt_inline_init(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rx_inline_lf_cfg_msg *msg;
- int ret;
-
- msg = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(mbox);
- msg->sso_pf_func = otx2_sso_pf_func_get();
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
-
-int
-otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp,
- uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_inline_ipsec_cfg_msg *msg;
- struct otx2_eth_dev *otx2_eth_dev;
- int ret;
-
- if (!otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- return -EINVAL;
-
- otx2_eth_dev = otx2_eth_pmd_priv(eth_dev);
-
- msg = otx2_mbox_alloc_msg_cpt_inline_ipsec_cfg(mbox);
- msg->dir = CPT_INLINE_OUTBOUND;
- msg->enable = 1;
- msg->slot = qp->id;
-
- msg->nix_pf_func = otx2_eth_dev->pf_func;
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
deleted file mode 100644
index 03323e418c..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_MBOX_H_
-#define _OTX2_CRYPTODEV_MBOX_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_cryptodev_hw_access.h"
-
-int otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps);
-
-int otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues);
-
-int otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues);
-
-int otx2_cpt_queues_detach(const struct rte_cryptodev *dev);
-
-int otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev);
-
-__rte_internal
-int otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val);
-
-__rte_internal
-int otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val);
-
-int otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint16_t port_id);
-
-int otx2_cpt_inline_init(const struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_MBOX_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
deleted file mode 100644
index 339b82f33e..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ /dev/null
@@ -1,1438 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <unistd.h>
-
-#include <cryptodev_pmd.h>
-#include <rte_errno.h>
-#include <ethdev_driver.h>
-#include <rte_event_crypto_adapter.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_po_ops.h"
-#include "otx2_mbox.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#include "cpt_hw_types.h"
-#include "cpt_pmd_logs.h"
-#include "cpt_pmd_ops_helper.h"
-#include "cpt_ucode.h"
-#include "cpt_ucode_asym.h"
-
-#define METABUF_POOL_CACHE_SIZE 512
-
-static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX];
-
-/* Forward declarations */
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id);
-
-static void
-qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
-{
- snprintf(name, size, "otx2_cpt_lf_mem_%u:%u", dev_id, qp_id);
-}
-
-static int
-otx2_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint8_t qp_id,
- unsigned int nb_elements)
-{
- char mempool_name[RTE_MEMPOOL_NAMESIZE];
- struct cpt_qp_meta_info *meta_info;
- int lcore_cnt = rte_lcore_count();
- int ret, max_mlen, mb_pool_sz;
- struct rte_mempool *pool;
- int asym_mlen = 0;
- int lb_mlen = 0;
- int sg_mlen = 0;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) {
-
- /* Get meta len for scatter gather mode */
- sg_mlen = cpt_pmd_ops_helper_get_mlen_sg_mode();
-
- /* Extra 32B saved for future considerations */
- sg_mlen += 4 * sizeof(uint64_t);
-
- /* Get meta len for linear buffer (direct) mode */
- lb_mlen = cpt_pmd_ops_helper_get_mlen_direct_mode();
-
- /* Extra 32B saved for future considerations */
- lb_mlen += 4 * sizeof(uint64_t);
- }
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
-
- /* Get meta len required for asymmetric operations */
- asym_mlen = cpt_pmd_ops_helper_asym_get_mlen();
- }
-
- /*
- * Check max requirement for meta buffer to
- * support crypto op of any type (sym/asym).
- */
- max_mlen = RTE_MAX(RTE_MAX(lb_mlen, sg_mlen), asym_mlen);
-
- /* Allocate mempool */
-
- snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "otx2_cpt_mb_%u:%u",
- dev->data->dev_id, qp_id);
-
- mb_pool_sz = nb_elements;
-
- /* For poll mode, core that enqueues and core that dequeues can be
- * different. For event mode, all cores are allowed to use same crypto
- * queue pair.
- */
- mb_pool_sz += (RTE_MAX(2, lcore_cnt) * METABUF_POOL_CACHE_SIZE);
-
- pool = rte_mempool_create_empty(mempool_name, mb_pool_sz, max_mlen,
- METABUF_POOL_CACHE_SIZE, 0,
- rte_socket_id(), 0);
-
- if (pool == NULL) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- return rte_errno;
- }
-
- ret = rte_mempool_set_ops_byname(pool, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
- NULL);
- if (ret) {
- CPT_LOG_ERR("Could not set mempool ops");
- goto mempool_free;
- }
-
- ret = rte_mempool_populate_default(pool);
- if (ret <= 0) {
- CPT_LOG_ERR("Could not populate metabuf pool");
- goto mempool_free;
- }
-
- meta_info = &qp->meta_info;
-
- meta_info->pool = pool;
- meta_info->lb_mlen = lb_mlen;
- meta_info->sg_mlen = sg_mlen;
-
- return 0;
-
-mempool_free:
- rte_mempool_free(pool);
- return ret;
-}
-
-static void
-otx2_cpt_metabuf_mempool_destroy(struct otx2_cpt_qp *qp)
-{
- struct cpt_qp_meta_info *meta_info = &qp->meta_info;
-
- rte_mempool_free(meta_info->pool);
-
- meta_info->pool = NULL;
- meta_info->lb_mlen = 0;
- meta_info->sg_mlen = 0;
-}
-
-static int
-otx2_cpt_qp_inline_cfg(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- static rte_atomic16_t port_offset = RTE_ATOMIC16_INIT(-1);
- uint16_t port_id, nb_ethport = rte_eth_dev_count_avail();
- int i, ret;
-
- for (i = 0; i < nb_ethport; i++) {
- port_id = rte_atomic16_add_return(&port_offset, 1) % nb_ethport;
- if (otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- break;
- }
-
- if (i >= nb_ethport)
- return 0;
-
- ret = otx2_cpt_qp_ethdev_bind(dev, qp, port_id);
- if (ret)
- return ret;
-
- /* Publish inline Tx QP to eth dev security */
- ret = otx2_sec_idev_tx_cpt_qp_add(port_id, qp);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static struct otx2_cpt_qp *
-otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
- uint8_t group)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- const struct rte_memzone *lf_mem;
- uint32_t len, iq_len, size_div40;
- char name[RTE_MEMZONE_NAMESIZE];
- uint64_t used_len, iova;
- struct otx2_cpt_qp *qp;
- uint64_t lmtline;
- uint8_t *va;
- int ret;
-
- /* Allocate queue pair */
- qp = rte_zmalloc_socket("OCTEON TX2 Crypto PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN, 0);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not allocate queue pair");
- return NULL;
- }
-
- /*
- * Pending queue updates make assumption that queue size is a power
- * of 2.
- */
- RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(OTX2_CPT_DEFAULT_CMD_QLEN));
-
- iq_len = OTX2_CPT_DEFAULT_CMD_QLEN;
-
- /*
- * Queue size must be a multiple of 40 and effective queue size to
- * software is (size_div40 - 1) * 40
- */
- size_div40 = (iq_len + 40 - 1) / 40 + 1;
-
- /* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
-
- /* Space for instruction group memory */
- len += size_div40 * 16;
-
- /* So that instruction queues start as pg size aligned */
- len = RTE_ALIGN(len, pg_sz);
-
- /* For instruction queues */
- len += OTX2_CPT_DEFAULT_CMD_QLEN * sizeof(union cpt_inst_s);
-
- /* Wastage after instruction queues */
- len = RTE_ALIGN(len, pg_sz);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp_id);
-
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
- RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
- RTE_CACHE_LINE_SIZE);
- if (lf_mem == NULL) {
- CPT_LOG_ERR("Could not allocate reserved memzone");
- goto qp_free;
- }
-
- va = lf_mem->addr;
- iova = lf_mem->iova;
-
- memset(va, 0, len);
-
- ret = otx2_cpt_metabuf_mempool_create(dev, qp, qp_id, iq_len);
- if (ret) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- goto lf_mem_free;
- }
-
- /* Initialize pending queue */
- qp->pend_q.rid_queue = (void **)va;
- qp->pend_q.tail = 0;
- qp->pend_q.head = 0;
-
- used_len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
- used_len += size_div40 * 16;
- used_len = RTE_ALIGN(used_len, pg_sz);
- iova += used_len;
-
- qp->iq_dma_addr = iova;
- qp->id = qp_id;
- qp->blkaddr = vf->lf_blkaddr[qp_id];
- qp->base = OTX2_CPT_LF_BAR2(vf, qp->blkaddr, qp_id);
-
- lmtline = vf->otx2_dev.bar2 +
- (RVU_BLOCK_ADDR_LMT << 20 | qp_id << 12) +
- OTX2_LMT_LF_LMTLINE(0);
-
- qp->lmtline = (void *)lmtline;
-
- qp->lf_nq_reg = qp->base + OTX2_CPT_LF_NQ(0);
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- goto mempool_destroy;
- }
-
- otx2_cpt_iq_disable(qp);
-
- ret = otx2_cpt_qp_inline_cfg(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not configure queue for inline IPsec");
- goto mempool_destroy;
- }
-
- ret = otx2_cpt_iq_enable(dev, qp, group, OTX2_CPT_QUEUE_HI_PRIO,
- size_div40);
- if (ret) {
- CPT_LOG_ERR("Could not enable instruction queue");
- goto mempool_destroy;
- }
-
- return qp;
-
-mempool_destroy:
- otx2_cpt_metabuf_mempool_destroy(qp);
-lf_mem_free:
- rte_memzone_free(lf_mem);
-qp_free:
- rte_free(qp);
- return NULL;
-}
-
-static int
-otx2_cpt_qp_destroy(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- const struct rte_memzone *lf_mem;
- char name[RTE_MEMZONE_NAMESIZE];
- int ret;
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- return ret;
- }
-
- otx2_cpt_iq_disable(qp);
-
- otx2_cpt_metabuf_mempool_destroy(qp);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp->id);
-
- lf_mem = rte_memzone_lookup(name);
-
- ret = rte_memzone_free(lf_mem);
- if (ret)
- return ret;
-
- rte_free(qp);
-
- return 0;
-}
-
-static int
-sym_xform_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->next) {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
- (xform->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC ||
- xform->next->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC ||
- xform->next->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->auth.algo == RTE_CRYPTO_AUTH_SHA1)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_SHA1 &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC)
- return -ENOTSUP;
-
- } else {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_NULL &&
- xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY)
- return -ENOTSUP;
- }
- return 0;
-}
-
-static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- struct rte_crypto_sym_xform *temp_xform = xform;
- struct cpt_sess_misc *misc;
- vq_cmd_word3_t vq_cmd_w3;
- void *priv;
- int ret;
-
- ret = sym_xform_verify(xform);
- if (unlikely(ret))
- return ret;
-
- if (unlikely(rte_mempool_get(pool, &priv))) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_sess_misc) +
- offsetof(struct cpt_ctx, mc_ctx));
-
- misc = priv;
-
- for ( ; xform != NULL; xform = xform->next) {
- switch (xform->type) {
- case RTE_CRYPTO_SYM_XFORM_AEAD:
- ret = fill_sess_aead(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_CIPHER:
- ret = fill_sess_cipher(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_AUTH:
- if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC)
- ret = fill_sess_gmac(xform, misc);
- else
- ret = fill_sess_auth(xform, misc);
- break;
- default:
- ret = -1;
- }
-
- if (ret)
- goto priv_put;
- }
-
- if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
- cpt_mac_len_verify(&temp_xform->auth)) {
- CPT_LOG_ERR("MAC length is not supported");
- struct cpt_ctx *ctx = SESS_PRIV(misc);
- if (ctx->auth_key != NULL) {
- rte_free(ctx->auth_key);
- ctx->auth_key = NULL;
- }
- ret = -ENOTSUP;
- goto priv_put;
- }
-
- set_sym_session_private_data(sess, driver_id, misc);
-
- misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
- sizeof(struct cpt_sess_misc);
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.cptr = misc->ctx_dma_addr + offsetof(struct cpt_ctx,
- mc_ctx);
-
- /*
- * IE engines support IPsec operations
- * SE engines support IPsec operations, Chacha-Poly and
- * Air-Crypto operations
- */
- if (misc->zsk_flag || misc->chacha_poly)
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE;
- else
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE_IE;
-
- misc->cpt_inst_w7 = vq_cmd_w3.u64;
-
- return 0;
-
-priv_put:
- rte_mempool_put(pool, priv);
-
- return -ENOTSUP;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
- struct cpt_request_info *req,
- void *lmtline,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7)
-{
- union rte_event_crypto_metadata *m_data;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- op->sym->session);
- if (m_data == NULL) {
- rte_pktmbuf_free(op->sym->m_src);
- rte_crypto_op_free(op);
- rte_errno = EINVAL;
- return -EINVAL;
- }
- } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)op +
- op->private_data_offset);
- } else {
- return -EINVAL;
- }
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
- m_data->response_info.flow_id) |
- ((uint64_t)m_data->response_info.sched_type << 32) |
- ((uint64_t)m_data->response_info.queue_id << 34));
- inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
- req->qp = qp;
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
- struct pending_queue *pend_q,
- struct cpt_request_info *req,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7,
- unsigned int burst_index)
-{
- void *lmtline = qp->lmtline;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (qp->ca_enable)
- return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- req->time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- pending_queue_push(pend_q, req, burst_index, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
- struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- struct cpt_qp_meta_info *minfo = &qp->meta_info;
- struct rte_crypto_asym_op *asym_op = op->asym;
- struct asym_op_params params = {0};
- struct cpt_asym_sess_misc *sess;
- uintptr_t *cop;
- void *mdata;
- int ret;
-
- if (unlikely(rte_mempool_get(minfo->pool, &mdata) < 0)) {
- CPT_LOG_ERR("Could not allocate meta buffer for request");
- return -ENOMEM;
- }
-
- sess = get_asym_session_private_data(asym_op->session,
- otx2_cryptodev_driver_id);
-
- /* Store IO address of the mdata to meta_buf */
- params.meta_buf = rte_mempool_virt2iova(mdata);
-
- cop = mdata;
- cop[0] = (uintptr_t)mdata;
- cop[1] = (uintptr_t)op;
- cop[2] = cop[3] = 0ULL;
-
- params.req = RTE_PTR_ADD(cop, 4 * sizeof(uintptr_t));
- params.req->op = cop;
-
- /* Adjust meta_buf to point to end of cpt_request_info structure */
- params.meta_buf += (4 * sizeof(uintptr_t)) +
- sizeof(struct cpt_request_info);
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- ret = cpt_modex_prep(¶ms, &sess->mod_ctx);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- ret = cpt_enqueue_rsa_op(op, ¶ms, sess);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- ret = cpt_enqueue_ecdsa_op(op, ¶ms, sess, otx2_fpm_iova);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- ret = cpt_ecpm_prep(&asym_op->ecpm, ¶ms,
- sess->ec_ctx.curveid);
- if (unlikely(ret))
- goto req_fail;
- break;
- default:
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ret = -EINVAL;
- goto req_fail;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
- sess->cpt_inst_w7, burst_index);
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Could not enqueue crypto req");
- goto req_fail;
- }
-
- return 0;
-
-req_fail:
- free_op_meta(mdata, minfo->pool);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q, unsigned int burst_index)
-{
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct cpt_request_info *req;
- struct cpt_sess_misc *sess;
- uint64_t cpt_op;
- void *mdata;
- int ret;
-
- sess = get_sym_session_private_data(sym_op->session,
- otx2_cryptodev_driver_id);
-
- cpt_op = sess->cpt_op;
-
- if (cpt_op & CPT_OP_CIPHER_MASK)
- ret = fill_fc_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
- else
- ret = fill_digest_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
-
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Crypto req : op %p, cpt_op 0x%x ret 0x%x",
- op, (unsigned int)cpt_op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
- if (unlikely(ret)) {
- /* Free buffer allocated by fill params routines */
- free_op_meta(mdata, qp->meta_info.pool);
- }
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- const unsigned int burst_index)
-{
- uint32_t winsz, esn_low = 0, esn_hi = 0, seql = 0, seqh = 0;
- struct rte_mbuf *m_src = op->sym->m_src;
- struct otx2_sec_session_ipsec_lp *sess;
- struct otx2_ipsec_po_sa_ctl *ctl_wrd;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *priv;
- struct cpt_request_info *req;
- uint64_t seq_in_sa, seq = 0;
- uint8_t esn;
- int ret;
-
- priv = get_sec_session_private_data(op->sym->sec_session);
- sess = &priv->ipsec.lp;
- sa = &sess->in_sa;
-
- ctl_wrd = &sa->ctl;
- esn = ctl_wrd->esn_en;
- winsz = sa->replay_win_sz;
-
- if (ctl_wrd->direction == OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND)
- ret = process_outb_sa(op, sess, &qp->meta_info, (void **)&req);
- else {
- if (winsz) {
- esn_low = rte_be_to_cpu_32(sa->esn_low);
- esn_hi = rte_be_to_cpu_32(sa->esn_hi);
- seql = *rte_pktmbuf_mtod_offset(m_src, uint32_t *,
- sizeof(struct rte_ipv4_hdr) + 4);
- seql = rte_be_to_cpu_32(seql);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = anti_replay_get_seqh(winsz, seql, esn_hi,
- esn_low);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- ret = anti_replay_check(sa->replay, seq, winsz);
- if (unlikely(ret)) {
- otx2_err("Anti replay check failed");
- return IPSEC_ANTI_REPLAY_FAILED;
- }
-
- if (esn) {
- seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- }
-
- ret = process_inb_sa(op, sess, &qp->meta_info, (void **)&req);
- }
-
- if (unlikely(ret)) {
- otx2_err("Crypto req : op %p, ret 0x%x", op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- const int driver_id = otx2_cryptodev_driver_id;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_cryptodev_sym_session *sess;
- int ret;
-
- /* Create temporary session */
- sess = rte_cryptodev_sym_session_create(qp->sess_mp);
- if (sess == NULL)
- return -ENOMEM;
-
- ret = sym_session_configure(driver_id, sym_op->xform, sess,
- qp->sess_mp_priv);
- if (ret)
- goto sess_put;
-
- sym_op->session = sess;
-
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q, burst_index);
-
- if (unlikely(ret))
- goto priv_put;
-
- return 0;
-
-priv_put:
- sym_session_clear(driver_id, sess);
-sess_put:
- rte_mempool_put(qp->sess_mp, sess);
- return ret;
-}
-
-static uint16_t
-otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- uint16_t nb_allowed, count = 0;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct rte_crypto_op *op;
- int ret;
-
- pend_q = &qp->pend_q;
-
- nb_allowed = pending_queue_free_slots(pend_q,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
- nb_ops = RTE_MIN(nb_ops, nb_allowed);
-
- for (count = 0; count < nb_ops; count++) {
- op = ops[count];
- if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
- ret = otx2_cpt_enqueue_sec(qp, op, pend_q,
- count);
- else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q,
- count);
- else
- ret = otx2_cpt_enqueue_sym_sessless(qp, op,
- pend_q, count);
- } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_asym(qp, op, pend_q,
- count);
- else
- break;
- } else
- break;
-
- if (unlikely(ret))
- break;
- }
-
- if (unlikely(!qp->ca_enable))
- pending_queue_commit(pend_q, count, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return count;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
- struct rte_crypto_rsa_xform *rsa_ctx)
-{
- struct rte_crypto_rsa_op_param *rsa = &cop->asym->rsa;
-
- switch (rsa->op_type) {
- case RTE_CRYPTO_ASYM_OP_ENCRYPT:
- rsa->cipher.length = rsa_ctx->n.length;
- memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
- break;
- case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->message.length = rsa_ctx->n.length;
- memcpy(rsa->message.data, req->rptr,
- rsa->message.length);
- } else {
- /* Get length of decrypted output */
- rsa->message.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy decrypted data.
- */
- memcpy(rsa->message.data, req->rptr + 2,
- rsa->message.length);
- }
- break;
- case RTE_CRYPTO_ASYM_OP_SIGN:
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- break;
- case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- } else {
- /* Get length of signed output */
- rsa->sign.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy signed data.
- */
- memcpy(rsa->sign.data, req->rptr + 2,
- rsa->sign.length);
- }
- if (memcmp(rsa->sign.data, rsa->message.data,
- rsa->message.length)) {
- CPT_LOG_DP_ERR("RSA verification failed");
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid RSA operation type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecdsa_op(struct rte_crypto_ecdsa_op_param *ecdsa,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- if (ecdsa->op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
- return;
-
- /* Separate out sign r and s components */
- memcpy(ecdsa->r.data, req->rptr, prime_len);
- memcpy(ecdsa->s.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecdsa->r.length = prime_len;
- ecdsa->s.length = prime_len;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecpm_op(struct rte_crypto_ecpm_op_param *ecpm,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- memcpy(ecpm->r.x.data, req->rptr, prime_len);
- memcpy(ecpm->r.y.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecpm->r.x.length = prime_len;
- ecpm->r.y.length = prime_len;
-}
-
-static void
-otx2_cpt_asym_post_process(struct rte_crypto_op *cop,
- struct cpt_request_info *req)
-{
- struct rte_crypto_asym_op *op = cop->asym;
- struct cpt_asym_sess_misc *sess;
-
- sess = get_asym_session_private_data(op->session,
- otx2_cryptodev_driver_id);
-
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- otx2_cpt_asym_rsa_op(cop, req, &sess->rsa_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- op->modex.result.length = sess->mod_ctx.modulus.length;
- memcpy(op->modex.result.data, req->rptr,
- op->modex.result.length);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- otx2_cpt_asym_dequeue_ecdsa_op(&op->ecdsa, req, &sess->ec_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- otx2_cpt_asym_dequeue_ecpm_op(&op->ecpm, req, &sess->ec_ctx);
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid crypto xform type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static void
-otx2_cpt_sec_post_process(struct rte_crypto_op *cop, uintptr_t *rsp)
-{
- struct cpt_request_info *req = (struct cpt_request_info *)rsp[2];
- vq_cmd_word0_t *word0 = (vq_cmd_word0_t *)&req->ist.ei0;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m = sym_op->m_src;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- uint16_t m_len = 0;
- int mdata_len;
- char *data;
-
- mdata_len = (int)rsp[3];
- rte_pktmbuf_trim(m, mdata_len);
-
- if (word0->s.opcode.major == OTX2_IPSEC_PO_PROCESS_IPSEC_INB) {
- data = rte_pktmbuf_mtod(m, char *);
- ip = (struct rte_ipv4_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
-
- if ((ip->version_ihl >> 4) == 4) {
- m_len = rte_be_to_cpu_16(ip->total_length);
- } else {
- ip6 = (struct rte_ipv6_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
- m_len = rte_be_to_cpu_16(ip6->payload_len) +
- sizeof(struct rte_ipv6_hdr);
- }
-
- m->data_len = m_len;
- m->pkt_len = m_len;
- m->data_off += OTX2_IPSEC_PO_INB_RPTR_HDR;
- }
-}
-
-static inline void
-otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
- uintptr_t *rsp, uint8_t cc)
-{
- unsigned int sz;
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
- if (likely(cc == OTX2_IPSEC_PO_CC_SUCCESS)) {
- otx2_cpt_sec_post_process(cop, rsp);
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
-
- return;
- }
-
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- sz = rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session);
- memset(cop->sym->session, 0, sz);
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /*
- * Pass cpt_req_info stored in metabuf during
- * enqueue.
- */
- rsp = RTE_PTR_ADD(rsp, 4 * sizeof(uintptr_t));
- otx2_cpt_asym_post_process(cop,
- (struct cpt_request_info *)rsp);
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-}
-
-static uint16_t
-otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- int i, nb_pending, nb_completed;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct cpt_request_info *req;
- struct rte_crypto_op *cop;
- uint8_t cc[nb_ops];
- uintptr_t *rsp;
- void *metabuf;
-
- pend_q = &qp->pend_q;
-
- nb_pending = pending_queue_level(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- /* Ensure pcount isn't read before data lands */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
-
- nb_ops = RTE_MIN(nb_ops, nb_pending);
-
- for (i = 0; i < nb_ops; i++) {
- pending_queue_peek(pend_q, (void **)&req,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
-
- cc[i] = otx2_cpt_compcode_get(req);
-
- if (unlikely(cc[i] == ERR_REQ_PENDING))
- break;
-
- ops[i] = req->op;
-
- pending_queue_pop(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
- }
-
- nb_completed = i;
-
- for (i = 0; i < nb_completed; i++) {
- rsp = (void *)ops[i];
-
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- ops[i] = cop;
-
- otx2_cpt_dequeue_post_process(qp, cop, rsp, cc[i]);
-
- free_op_meta(metabuf, qp->meta_info.pool);
- }
-
- return nb_completed;
-}
-
-void
-otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
-{
- dev->enqueue_burst = otx2_cpt_enqueue_burst;
- dev->dequeue_burst = otx2_cpt_dequeue_burst;
-
- rte_mb();
-}
-
-/* PMD ops */
-
-static int
-otx2_cpt_dev_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *conf)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int ret;
-
- if (conf->nb_queue_pairs > vf->max_queues) {
- CPT_LOG_ERR("Invalid number of queue pairs requested");
- return -EINVAL;
- }
-
- dev->feature_flags = otx2_cpt_default_ff_get() & ~conf->ff_disable;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
- /* Initialize shared FPM table */
- ret = cpt_fpm_init(otx2_fpm_iova);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret) {
- CPT_LOG_ERR("Could not detach CPT queues");
- return ret;
- }
- }
-
- /* Attach queues */
- ret = otx2_cpt_queues_attach(dev, conf->nb_queue_pairs);
- if (ret) {
- CPT_LOG_ERR("Could not attach CPT queues");
- return -ENODEV;
- }
-
- ret = otx2_cpt_msix_offsets_get(dev);
- if (ret) {
- CPT_LOG_ERR("Could not get MSI-X offsets");
- goto queues_detach;
- }
-
- /* Register error interrupts */
- ret = otx2_cpt_err_intr_register(dev);
- if (ret) {
- CPT_LOG_ERR("Could not register error interrupts");
- goto queues_detach;
- }
-
- ret = otx2_cpt_inline_init(dev);
- if (ret) {
- CPT_LOG_ERR("Could not enable inline IPsec");
- goto intr_unregister;
- }
-
- otx2_cpt_set_enqdeq_fns(dev);
-
- return 0;
-
-intr_unregister:
- otx2_cpt_err_intr_unregister(dev);
-queues_detach:
- otx2_cpt_queues_detach(dev);
- return ret;
-}
-
-static int
-otx2_cpt_dev_start(struct rte_cryptodev *dev)
-{
- RTE_SET_USED(dev);
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- return 0;
-}
-
-static void
-otx2_cpt_dev_stop(struct rte_cryptodev *dev)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO)
- cpt_fpm_clear();
-}
-
-static int
-otx2_cpt_dev_close(struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int i, ret = 0;
-
- for (i = 0; i < dev->data->nb_queue_pairs; i++) {
- ret = otx2_cpt_queue_pair_release(dev, i);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret)
- CPT_LOG_ERR("Could not detach CPT queues");
- }
-
- return ret;
-}
-
-static void
-otx2_cpt_dev_info_get(struct rte_cryptodev *dev,
- struct rte_cryptodev_info *info)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
-
- if (info != NULL) {
- info->max_nb_queue_pairs = vf->max_queues;
- info->feature_flags = otx2_cpt_default_ff_get();
- info->capabilities = otx2_cpt_capabilities_get();
- info->sym.max_nb_sessions = 0;
- info->driver_id = otx2_cryptodev_driver_id;
- info->min_mbuf_headroom_req = OTX2_CPT_MIN_HEADROOM_REQ;
- info->min_mbuf_tailroom_req = OTX2_CPT_MIN_TAILROOM_REQ;
- }
-}
-
-static int
-otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *conf,
- int socket_id __rte_unused)
-{
- uint8_t grp_mask = OTX2_CPT_ENG_GRPS_MASK;
- struct rte_pci_device *pci_dev;
- struct otx2_cpt_qp *qp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->data->queue_pairs[qp_id] != NULL)
- otx2_cpt_queue_pair_release(dev, qp_id);
-
- if (conf->nb_descriptors > OTX2_CPT_DEFAULT_CMD_QLEN) {
- CPT_LOG_ERR("Could not setup queue pair for %u descriptors",
- conf->nb_descriptors);
- return -EINVAL;
- }
-
- pci_dev = RTE_DEV_TO_PCI(dev->device);
-
- if (pci_dev->mem_resource[2].addr == NULL) {
- CPT_LOG_ERR("Invalid PCI mem address");
- return -EIO;
- }
-
- qp = otx2_cpt_qp_create(dev, qp_id, grp_mask);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not create queue pair %d", qp_id);
- return -ENOMEM;
- }
-
- qp->sess_mp = conf->mp_session;
- qp->sess_mp_priv = conf->mp_session_private;
- dev->data->queue_pairs[qp_id] = qp;
-
- return 0;
-}
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
-{
- struct otx2_cpt_qp *qp = dev->data->queue_pairs[qp_id];
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (qp == NULL)
- return -EINVAL;
-
- CPT_LOG_INFO("Releasing queue pair %d", qp_id);
-
- ret = otx2_cpt_qp_destroy(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not destroy queue pair %d", qp_id);
- return ret;
- }
-
- dev->data->queue_pairs[qp_id] = NULL;
-
- return 0;
-}
-
-static unsigned int
-otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
-{
- return cpt_get_session_size();
-}
-
-static int
-otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_configure(dev->driver_id, xform, sess, pool);
-}
-
-static void
-otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_clear(dev->driver_id, sess);
-}
-
-static unsigned int
-otx2_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
-{
- return sizeof(struct cpt_asym_sess_misc);
-}
-
-static int
-otx2_cpt_asym_session_cfg(struct rte_cryptodev *dev,
- struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
-{
- struct cpt_asym_sess_misc *priv;
- vq_cmd_word3_t vq_cmd_w3;
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session_private_data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
- ret = cpt_fill_asym_session_parameters(priv, xform);
- if (ret) {
- CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
- return ret;
- }
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_AE;
- priv->cpt_inst_w7 = vq_cmd_w3.u64;
-
- set_asym_session_private_data(sess, dev->driver_id, priv);
-
- return 0;
-}
-
-static void
-otx2_cpt_asym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_asym_session *sess)
-{
- struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- priv = get_asym_session_private_data(sess, dev->driver_id);
- if (priv == NULL)
- return;
-
- /* Free resources allocated in session_cfg */
- cpt_free_asym_session_parameters(priv);
-
- /* Reset and free object back to pool */
- memset(priv, 0, otx2_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
-}
-
-struct rte_cryptodev_ops otx2_cpt_ops = {
- /* Device control ops */
- .dev_configure = otx2_cpt_dev_config,
- .dev_start = otx2_cpt_dev_start,
- .dev_stop = otx2_cpt_dev_stop,
- .dev_close = otx2_cpt_dev_close,
- .dev_infos_get = otx2_cpt_dev_info_get,
-
- .stats_get = NULL,
- .stats_reset = NULL,
- .queue_pair_setup = otx2_cpt_queue_pair_setup,
- .queue_pair_release = otx2_cpt_queue_pair_release,
-
- /* Symmetric crypto ops */
- .sym_session_get_size = otx2_cpt_sym_session_get_size,
- .sym_session_configure = otx2_cpt_sym_session_configure,
- .sym_session_clear = otx2_cpt_sym_session_clear,
-
- /* Asymmetric crypto ops */
- .asym_session_get_size = otx2_cpt_asym_session_size_get,
- .asym_session_configure = otx2_cpt_asym_session_cfg,
- .asym_session_clear = otx2_cpt_asym_session_clear,
-
-};
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
deleted file mode 100644
index 7faf7ad034..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_H_
-#define _OTX2_CRYPTODEV_OPS_H_
-
-#include <cryptodev_pmd.h>
-
-#define OTX2_CPT_MIN_HEADROOM_REQ 48
-#define OTX2_CPT_MIN_TAILROOM_REQ 208
-
-extern struct rte_cryptodev_ops otx2_cpt_ops;
-
-#endif /* _OTX2_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
deleted file mode 100644
index 01c081a216..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_HELPER_H_
-#define _OTX2_CRYPTODEV_OPS_HELPER_H_
-
-#include "cpt_pmd_logs.h"
-
-static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
-{
- void *priv = get_sym_session_private_data(sess, driver_id);
- struct cpt_sess_misc *misc;
- struct rte_mempool *pool;
- struct cpt_ctx *ctx;
-
- if (priv == NULL)
- return;
-
- misc = priv;
- ctx = SESS_PRIV(misc);
-
- if (ctx->auth_key != NULL)
- rte_free(ctx->auth_key);
-
- memset(priv, 0, cpt_get_session_size());
-
- pool = rte_mempool_from_obj(priv);
-
- set_sym_session_private_data(sess, driver_id, NULL);
-
- rte_mempool_put(pool, priv);
-}
-
-static __rte_always_inline uint8_t
-otx2_cpt_compcode_get(struct cpt_request_info *req)
-{
- volatile struct cpt_res_s_9s *res;
- uint8_t ret;
-
- res = (volatile struct cpt_res_s_9s *)req->completion_addr;
-
- if (unlikely(res->compcode == CPT_9X_COMP_E_NOTDONE)) {
- if (rte_get_timer_cycles() < req->time_out)
- return ERR_REQ_PENDING;
-
- CPT_LOG_DP_ERR("Request timed out");
- return ERR_REQ_TIMEOUT;
- }
-
- if (likely(res->compcode == CPT_9X_COMP_E_GOOD)) {
- ret = NO_ERR;
- if (unlikely(res->uc_compcode)) {
- ret = res->uc_compcode;
- CPT_LOG_DP_DEBUG("Request failed with microcode error");
- CPT_LOG_DP_DEBUG("MC completion code 0x%x",
- res->uc_compcode);
- }
- } else {
- CPT_LOG_DP_DEBUG("HW completion code 0x%x", res->compcode);
-
- ret = res->compcode;
- switch (res->compcode) {
- case CPT_9X_COMP_E_INSTERR:
- CPT_LOG_DP_ERR("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- CPT_LOG_DP_ERR("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- CPT_LOG_DP_ERR("Request failed with hardware error");
- break;
- default:
- CPT_LOG_DP_ERR("Request failed with unknown completion code");
- }
- }
-
- return ret;
-}
-
-#endif /* _OTX2_CRYPTODEV_OPS_HELPER_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
deleted file mode 100644
index 95bce3621a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#ifndef _OTX2_CRYPTODEV_QP_H_
-#define _OTX2_CRYPTODEV_QP_H_
-
-#include <rte_common.h>
-#include <rte_eventdev.h>
-#include <rte_mempool.h>
-#include <rte_spinlock.h>
-
-#include "cpt_common.h"
-
-struct otx2_cpt_qp {
- uint32_t id;
- /**< Queue pair id */
- uint8_t blkaddr;
- /**< CPT0/1 BLKADDR of LF */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- void *lmtline;
- /**< Address of LMTLINE */
- rte_iova_t lf_nq_reg;
- /**< LF enqueue register address */
- struct pending_queue pend_q;
- /**< Pending queue */
- struct rte_mempool *sess_mp;
- /**< Session mempool */
- struct rte_mempool *sess_mp_priv;
- /**< Session private data mempool */
- struct cpt_qp_meta_info meta_info;
- /**< Metabuf info required to support operations on the queue pair */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- struct rte_event ev;
- /**< Event information required for binding cryptodev queue to
- * eventdev queue. Used by crypto adapter.
- */
- uint8_t ca_enable;
- /**< Set when queue pair is added to crypto adapter */
- uint8_t qp_ev_bind;
- /**< Set when queue pair is bound to event queue */
-};
-
-#endif /* _OTX2_CRYPTODEV_QP_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
deleted file mode 100644
index 9a4f84f8d8..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ /dev/null
@@ -1,655 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_security.h"
-
-static int
-ipsec_lp_len_precalc(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_lp *lp)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- lp->partial_len = 0;
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- lp->partial_len = sizeof(struct rte_ipv4_hdr);
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- lp->partial_len = sizeof(struct rte_ipv6_hdr);
- else
- return -EINVAL;
- }
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- lp->partial_len += sizeof(struct rte_esp_hdr);
- lp->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- lp->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- lp->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- lp->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- lp->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- lp->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- return 0;
- } else {
- return -EINVAL;
- }
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- lp->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- lp->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- lp->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- lp->partial_len += OTX2_SEC_SHA2_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
-
-static int
-otx2_cpt_enq_sa_write(struct otx2_sec_session_ipsec_lp *lp,
- struct otx2_cpt_qp *qptr, uint8_t opcode)
-{
- uint64_t lmt_status, time_out;
- void *lmtline = qptr->lmtline;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_res *res;
- uint64_t *mdata;
- int ret = 0;
-
- if (unlikely(rte_mempool_get(qptr->meta_info.pool,
- (void **)&mdata) < 0))
- return -ENOMEM;
-
- res = (struct otx2_cpt_res *)RTE_PTR_ALIGN(mdata, 16);
- res->compcode = CPT_9X_COMP_E_NOTDONE;
-
- inst.opcode = opcode | (lp->ctx_len << 8);
- inst.param1 = 0;
- inst.param2 = 0;
- inst.dlen = lp->ctx_len << 3;
- inst.dptr = rte_mempool_virt2iova(lp);
- inst.rptr = 0;
- inst.cptr = rte_mempool_virt2iova(lp);
- inst.egrp = OTX2_CPT_EGRP_SE;
-
- inst.u64[0] = 0;
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.res_addr = rte_mempool_virt2iova(res);
-
- rte_io_wmb();
-
- do {
- /* Copy CPT command to LMTLINE */
- otx2_lmt_mov(lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qptr->lf_nq_reg);
- } while (lmt_status == 0);
-
- time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- while (res->compcode == CPT_9X_COMP_E_NOTDONE) {
- if (rte_get_timer_cycles() > time_out) {
- rte_mempool_put(qptr->meta_info.pool, mdata);
- otx2_err("Request timed out");
- return -ETIMEDOUT;
- }
- rte_io_rmb();
- }
-
- if (unlikely(res->compcode != CPT_9X_COMP_E_GOOD)) {
- ret = res->compcode;
- switch (ret) {
- case CPT_9X_COMP_E_INSTERR:
- otx2_err("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- otx2_err("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- otx2_err("Request failed with hardware error");
- break;
- default:
- otx2_err("Request failed with unknown hardware "
- "completion code : 0x%x", ret);
- }
- goto mempool_put;
- }
-
- if (unlikely(res->uc_compcode != OTX2_IPSEC_PO_CC_SUCCESS)) {
- ret = res->uc_compcode;
- switch (ret) {
- case OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED:
- otx2_err("Invalid auth type");
- break;
- case OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED:
- otx2_err("Invalid encrypt type");
- break;
- default:
- otx2_err("Request failed with unknown microcode "
- "completion code : 0x%x", ret);
- }
- }
-
-mempool_put:
- rte_mempool_put(qptr->meta_info.pool, mdata);
- return ret;
-}
-
-static void
-set_session_misc_attributes(struct otx2_sec_session_ipsec_lp *sess,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_crypto_sym_xform *auth_xform,
- struct rte_crypto_sym_xform *cipher_xform)
-{
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- sess->iv_offset = crypto_xform->aead.iv.offset;
- sess->iv_length = crypto_xform->aead.iv.length;
- sess->aad_length = crypto_xform->aead.aad_length;
- sess->mac_len = crypto_xform->aead.digest_length;
- } else {
- sess->iv_offset = cipher_xform->cipher.iv.offset;
- sess->iv_length = cipher_xform->cipher.iv.length;
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
- sess->auth_iv_length = auth_xform->auth.iv.length;
- sess->mac_len = auth_xform->auth.digest_length;
- }
-}
-
-static int
-crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_ipsec_po_ip_template *template = NULL;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_out_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- int ret, ctx_len;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_out_sa));
-
- /* Initialize lookaside ipsec private data */
- lp->ip_id = 0;
- lp->seq_lo = 1;
- lp->seq_hi = 0;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- ret = ipsec_lp_len_precalc(ipsec, crypto_xform, lp);
- if (ret)
- return ret;
-
- /* Start ip id from 1 */
- lp->ip_id = 1;
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
- ip = &template->ip4.ipv4_hdr;
- if (ipsec->options.udp_encap) {
- ip->next_proto_id = IPPROTO_UDP;
- template->ip4.udp_src = rte_be_to_cpu_16(4500);
- template->ip4.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip->next_proto_id = IPPROTO_ESP;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- ip->version_ihl = RTE_IPV4_VHL_DEF;
- ip->time_to_live = ipsec->tunnel.ipv4.ttl;
- ip->type_of_service |= (ipsec->tunnel.ipv4.dscp << 2);
- if (ipsec->tunnel.ipv4.df)
- ip->fragment_offset = BIT(14);
- memcpy(&ip->src_addr, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&ip->dst_addr, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else if (ipsec->tunnel.type ==
- RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
-
- ip6 = &template->ip6.ipv6_hdr;
- if (ipsec->options.udp_encap) {
- ip6->proto = IPPROTO_UDP;
- template->ip6.udp_src = rte_be_to_cpu_16(4500);
- template->ip6.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip6->proto = (ipsec->proto ==
- RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
- IPPROTO_ESP : IPPROTO_AH;
- }
- ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 |
- ((ipsec->tunnel.ipv6.dscp <<
- RTE_IPV6_HDR_TC_SHIFT) &
- RTE_IPV6_HDR_TC_MASK) |
- ((ipsec->tunnel.ipv6.flabel <<
- RTE_IPV6_HDR_FL_SHIFT) &
- RTE_IPV6_HDR_FL_MASK));
- ip6->hop_limits = ipsec->tunnel.ipv6.hlimit;
- memcpy(&ip6->src_addr, &ipsec->tunnel.ipv6.src_addr,
- sizeof(struct in6_addr));
- memcpy(&ip6->dst_addr, &ipsec->tunnel.ipv6.dst_addr,
- sizeof(struct in6_addr));
- }
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- memcpy(sa->sha1.hmac_key, auth_key, auth_key_len);
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB);
-
- /* Set per packet IV and IKEv2 bits */
- lp->ucmd_param1 = BIT(11) | BIT(9);
- lp->ucmd_param2 = 0;
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_OUTB);
-}
-
-static int
-crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- int ret;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->in_sa;
- ctl = &sa->ctl;
-
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_in_sa));
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
-
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.hmac_key[0]) >> 3;
- RTE_ASSERT(lp->ctx_len == OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN);
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- memcpy(sa->aes_gcm.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.selector) >> 3;
- } else if (auth_xform->auth.algo ==
- RTE_CRYPTO_AUTH_SHA256_HMAC) {
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- sha2.selector) >> 3;
- }
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_INB);
- lp->ucmd_param1 = 0;
-
- /* Set IKEv2 bit */
- lp->ucmd_param2 = BIT(12);
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- otx2_err("Replay window size is not supported");
- return -ENOTSUP;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL)
- return -ENOMEM;
-
- /* Set window bottom to 1, base and top to size of window */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_INB);
-}
-
-static int
-crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- if (crypto_dev->data->queue_pairs[0] == NULL) {
- otx2_err("Setup cpt queue pair before creating sec session");
- return -EPERM;
- }
-
- ret = ipsec_po_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return crypto_sec_ipsec_inb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
- else
- return crypto_sec_ipsec_outb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_crypto_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
- struct rte_security_session *sess)
-{
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
-
- priv = get_sec_session_private_data(sess);
-
- if (priv == NULL)
- return 0;
-
- sess_mp = rte_mempool_from_obj(priv);
-
- memset(priv, 0, sizeof(*priv));
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_crypto_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
- struct rte_security_session *session,
- struct rte_mbuf *m, void *params __rte_unused)
-{
- /* Set security session as the pkt metadata */
- *rte_security_dynfield(m) = (rte_security_dynfield_t)session;
-
- return 0;
-}
-
-static int
-otx2_crypto_sec_get_userdata(void *device __rte_unused, uint64_t md,
- void **userdata)
-{
- /* Retrieve userdata */
- *userdata = (void *)md;
-
- return 0;
-}
-
-static struct rte_security_ops otx2_crypto_sec_ops = {
- .session_create = otx2_crypto_sec_session_create,
- .session_destroy = otx2_crypto_sec_session_destroy,
- .session_get_size = otx2_crypto_sec_session_get_size,
- .set_pkt_metadata = otx2_crypto_sec_set_pkt_mdata,
- .get_userdata = otx2_crypto_sec_get_userdata,
- .capabilities_get = otx2_crypto_sec_capabilities_get
-};
-
-int
-otx2_crypto_sec_ctx_create(struct rte_cryptodev *cdev)
-{
- struct rte_security_ctx *ctx;
-
- ctx = rte_malloc("otx2_cpt_dev_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
-
- if (ctx == NULL)
- return -ENOMEM;
-
- /* Populate ctx */
- ctx->device = cdev;
- ctx->ops = &otx2_crypto_sec_ops;
- ctx->sess_cnt = 0;
-
- cdev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *cdev)
-{
- rte_free(cdev->security_ctx);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h b/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
deleted file mode 100644
index ff3329c9c1..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_CRYPTODEV_SEC_H__
-#define __OTX2_CRYPTODEV_SEC_H__
-
-#include <rte_cryptodev.h>
-
-#include "otx2_ipsec_po.h"
-
-struct otx2_sec_session_ipsec_lp {
- RTE_STD_C11
- union {
- /* Inbound SA */
- struct otx2_ipsec_po_in_sa in_sa;
- /* Outbound SA */
- struct otx2_ipsec_po_out_sa out_sa;
- };
-
- uint64_t cpt_inst_w7;
- union {
- uint64_t ucmd_w0;
- struct {
- uint16_t ucmd_dlen;
- uint16_t ucmd_param2;
- uint16_t ucmd_param1;
- uint16_t ucmd_opcode;
- };
- };
-
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq_lo;
- uint32_t seq_hi;
- };
- };
-
- /** Context length in 8-byte words */
- size_t ctx_len;
- /** Auth IV offset in bytes */
- uint16_t auth_iv_offset;
- /** IV offset in bytes */
- uint16_t iv_offset;
- /** AAD length */
- uint16_t aad_length;
- /** MAC len in bytes */
- uint8_t mac_len;
- /** IV length in bytes */
- uint8_t iv_length;
- /** Auth IV length in bytes */
- uint8_t auth_iv_length;
-};
-
-int otx2_crypto_sec_ctx_create(struct rte_cryptodev *crypto_dev);
-
-void otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *crypto_dev);
-
-#endif /* __OTX2_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h b/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
deleted file mode 100644
index 089a3d073a..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
+++ /dev/null
@@ -1,227 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_ANTI_REPLAY_H__
-#define __OTX2_IPSEC_ANTI_REPLAY_H__
-
-#include <rte_mbuf.h>
-
-#include "otx2_ipsec_fp.h"
-
-#define WORD_SHIFT 6
-#define WORD_SIZE (1 << WORD_SHIFT)
-#define WORD_MASK (WORD_SIZE - 1)
-
-#define IPSEC_ANTI_REPLAY_FAILED (-1)
-
-static inline int
-anti_replay_check(struct otx2_ipsec_replay *replay, uint64_t seq,
- uint64_t winsz)
-{
- uint64_t *window = &replay->window[0];
- uint64_t ex_winsz = winsz + WORD_SIZE;
- uint64_t winwords = ex_winsz >> WORD_SHIFT;
- uint64_t base = replay->base;
- uint32_t winb = replay->winb;
- uint32_t wint = replay->wint;
- uint64_t seqword, shiftwords;
- uint64_t bit_pos;
- uint64_t shift;
- uint64_t *wptr;
- uint64_t tmp;
-
- if (winsz > 64)
- goto slow_shift;
- /* Check if the seq is the biggest one yet */
- if (likely(seq > base)) {
- shift = seq - base;
- if (shift < winsz) { /* In window */
- /*
- * If more than 64-bit anti-replay window,
- * use slow shift routine
- */
- wptr = window + (shift >> WORD_SHIFT);
- *wptr <<= shift;
- *wptr |= 1ull;
- } else {
- /* No special handling of window size > 64 */
- wptr = window + ((winsz - 1) >> WORD_SHIFT);
- /*
- * Zero out the whole window (especially for
- * bigger than 64b window) till the last 64b word
- * as the incoming sequence number minus
- * base sequence is more than the window size.
- */
- while (window != wptr)
- *window++ = 0ull;
- /*
- * Set the last bit (of the window) to 1
- * as that corresponds to the base sequence number.
- * Now any incoming sequence number which is
- * (base - window size - 1) will pass anti-replay check
- */
- *wptr = 1ull;
- }
- /*
- * Set the base to incoming sequence number as
- * that is the biggest sequence number seen yet
- */
- replay->base = seq;
- return 0;
- }
-
- bit_pos = base - seq;
-
- /* If seq falls behind the window, return failure */
- if (bit_pos >= winsz)
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* seq is within anti-replay window */
- wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
- bit_pos &= WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (*wptr & ((1ull) << bit_pos))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* mark as seen */
- *wptr |= ((1ull) << bit_pos);
- return 0;
-
-slow_shift:
- if (likely(seq > base)) {
- uint32_t i;
-
- shift = seq - base;
- if (unlikely(shift >= winsz)) {
- /*
- * shift is bigger than the window,
- * so just zero out everything
- */
- for (i = 0; i < winwords; i++)
- window[i] = 0;
-winupdate:
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- /* wint and winb range from 1 to ex_winsz */
- replay->wint = ((wint + shift - 1) % ex_winsz) + 1;
- replay->winb = ((winb + shift - 1) % ex_winsz) + 1;
-
- replay->base = seq;
- return 0;
- }
-
- /*
- * New sequence number is bigger than the base but
- * it's not bigger than base + window size
- */
-
- shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
- ((wint - 1) >> WORD_SHIFT);
- if (unlikely(shiftwords)) {
- tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
- for (i = 0; i < shiftwords; i++) {
- tmp %= winwords;
- window[tmp++] = 0;
- }
- }
-
- goto winupdate;
- }
-
- /* Sequence number is before the window */
- if (unlikely((seq + winsz) <= base))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* Sequence number is within the window */
-
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (window[seqword] & (1ull << (63 - bit_pos)))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- return 0;
-}
-
-static inline int
-cpt_ipsec_ip_antireplay_check(struct otx2_ipsec_fp_in_sa *sa, void *l3_ptr)
-{
- struct otx2_ipsec_fp_res_hdr *hdr = l3_ptr;
- uint64_t seq_in_sa;
- uint32_t seqh = 0;
- uint32_t seql;
- uint64_t seq;
- uint8_t esn;
- int ret;
-
- esn = sa->ctl.esn_en;
- seql = rte_be_to_cpu_32(hdr->seq_no_lo);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = rte_be_to_cpu_32(hdr->seq_no_hi);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- rte_spinlock_lock(&sa->replay->lock);
- ret = anti_replay_check(sa->replay, seq, sa->replay_win_sz);
- if (esn && (ret == 0)) {
- seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
- rte_be_to_cpu_32(sa->esn_low);
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- rte_spinlock_unlock(&sa->replay->lock);
-
- return ret;
-}
-
-static inline uint32_t
-anti_replay_get_seqh(uint32_t winsz, uint32_t seql,
- uint32_t esn_hi, uint32_t esn_low)
-{
- uint32_t win_low = esn_low - winsz + 1;
-
- if (esn_low > winsz - 1) {
- /* Window is in one sequence number subspace */
- if (seql > win_low)
- return esn_hi;
- else
- return esn_hi + 1;
- } else {
- /* Window is split across two sequence number subspaces */
- if (seql > win_low)
- return esn_hi - 1;
- else
- return esn_hi;
- }
-}
-#endif /* __OTX2_IPSEC_ANTI_REPLAY_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_fp.h b/drivers/crypto/octeontx2/otx2_ipsec_fp.h
deleted file mode 100644
index 2461e7462b..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_fp.h
+++ /dev/null
@@ -1,371 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_FP_H__
-#define __OTX2_IPSEC_FP_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-/* Macros for anti replay and ESN */
-#define OTX2_IPSEC_MAX_REPLAY_WIN_SZ 1024
-
-struct otx2_ipsec_fp_res_hdr {
- uint32_t spi;
- uint32_t seq_no_lo;
- uint32_t seq_no_hi;
- uint32_t rsvd;
-};
-
-enum {
- OTX2_IPSEC_FP_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_FP_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_FP_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_FP_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENC_NULL = 0,
- OTX2_IPSEC_FP_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_FP_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_FP_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_FP_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_FP_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_FP_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AUTH_NULL = 0,
- OTX2_IPSEC_FP_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_FP_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_FP_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_FRAG_POST = 0,
- OTX2_IPSEC_FP_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_FP_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_fp_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_fp_out_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4];
- uint16_t udp_src;
- uint16_t udp_dst;
-
- /* w2 */
- uint32_t ip_src;
- uint32_t ip_dst;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
-struct otx2_ipsec_replay {
- rte_spinlock_t lock;
- uint32_t winb;
- uint32_t wint;
- uint64_t base; /**< base of the anti-replay window */
- uint64_t window[17]; /**< anti-replay window */
-};
-
-struct otx2_ipsec_fp_in_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4]; /* Only for AES-GCM */
- uint32_t unused;
-
- /* w2 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-
- RTE_STD_C11
- union {
- void *userdata;
- uint64_t udata64;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-
- uint32_t reserved1;
-};
-
-static inline int
-ipsec_fp_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_fp_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_fp_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_fp_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_fp_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_fp_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn == 1)
- ctl->esn_en = 1;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_FP_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po.h b/drivers/crypto/octeontx2/otx2_ipsec_po.h
deleted file mode 100644
index 695f552644..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po.h
+++ /dev/null
@@ -1,447 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_H__
-#define __OTX2_IPSEC_PO_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_ip.h>
-#include <rte_security.h>
-
-#define OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN 0x09
-
-#define OTX2_IPSEC_PO_WRITE_IPSEC_OUTB 0x20
-#define OTX2_IPSEC_PO_WRITE_IPSEC_INB 0x21
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB 0x23
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_INB 0x24
-
-#define OTX2_IPSEC_PO_INB_RPTR_HDR 0x8
-
-enum otx2_ipsec_po_comp_e {
- OTX2_IPSEC_PO_CC_SUCCESS = 0x00,
- OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED = 0xB0,
- OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED = 0xB1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_PO_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_PO_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_PO_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENC_NULL = 0,
- OTX2_IPSEC_PO_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_PO_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_PO_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_PO_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_PO_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_PO_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AUTH_NULL = 0,
- OTX2_IPSEC_PO_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_PO_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_PO_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_FRAG_POST = 0,
- OTX2_IPSEC_PO_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_PO_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_po_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-union otx2_ipsec_po_bit_perfect_iv {
- uint8_t aes_iv[16];
- uint8_t des_iv[8];
- struct {
- uint8_t nonce[4];
- uint8_t iv[8];
- uint8_t counter[4];
- } gcm;
-};
-
-struct otx2_ipsec_po_traffic_selector {
- rte_be16_t src_port[2];
- rte_be16_t dst_port[2];
- RTE_STD_C11
- union {
- struct {
- rte_be32_t src_addr[2];
- rte_be32_t dst_addr[2];
- } ipv4;
- struct {
- uint8_t src_addr[32];
- uint8_t dst_addr[32];
- } ipv6;
- };
-};
-
-struct otx2_ipsec_po_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_po_in_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8 */
- uint8_t udp_encap[8];
-
- /* w9-w33 */
- union {
- struct {
- uint8_t hmac_key[48];
- struct otx2_ipsec_po_traffic_selector selector;
- } aes_gcm;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_traffic_selector selector;
- } sha2;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-};
-
-struct otx2_ipsec_po_ip_template {
- RTE_STD_C11
- union {
- struct {
- struct rte_ipv4_hdr ipv4_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip4;
- struct {
- struct rte_ipv6_hdr ipv6_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip6;
- };
-};
-
-struct otx2_ipsec_po_out_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8-w55 */
- union {
- struct {
- struct otx2_ipsec_po_ip_template template;
- } aes_gcm;
- struct {
- uint8_t hmac_key[24];
- uint8_t unused[24];
- struct otx2_ipsec_po_ip_template template;
- } sha1;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_ip_template template;
- } sha2;
- };
-};
-
-static inline int
-ipsec_po_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- } else if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) {
- if (keylen >= 32 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (ipsec->life.bytes_hard_limit != 0 ||
- ipsec->life.bytes_soft_limit != 0 ||
- ipsec->life.packets_hard_limit != 0 ||
- ipsec->life.packets_soft_limit != 0)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_po_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_po_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_po_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_po_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_po_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = ctl->outer_ip_ver;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn)
- ctl->esn_en = 1;
-
- if (ipsec->options.udp_encap == 1)
- ctl->encap_type = OTX2_IPSEC_PO_SA_ENCAP_UDP;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
- ctl->valid = 1;
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_PO_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h b/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
deleted file mode 100644
index c3abf02187..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
+++ /dev/null
@@ -1,167 +0,0 @@
-
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_OPS_H__
-#define __OTX2_IPSEC_PO_OPS_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_security.h"
-
-static __rte_always_inline int32_t
-otx2_ipsec_po_out_rlen_get(struct otx2_sec_session_ipsec_lp *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline struct cpt_request_info *
-alloc_request_struct(char *maddr, void *cop, int mdata_len)
-{
- struct cpt_request_info *req;
- struct cpt_meta_info *meta;
- uint8_t *resp_addr;
- uintptr_t *op;
-
- meta = (void *)RTE_PTR_ALIGN((uint8_t *)maddr, 16);
-
- op = (uintptr_t *)meta->deq_op_info;
- req = &meta->cpt_req;
- resp_addr = (uint8_t *)&meta->cpt_res;
-
- req->completion_addr = (uint64_t *)((uint8_t *)resp_addr);
- *req->completion_addr = COMPLETION_CODE_INIT;
- req->comp_baddr = rte_mem_virt2iova(resp_addr);
- req->op = op;
-
- op[0] = (uintptr_t)((uint64_t)meta | 1ull);
- op[1] = (uintptr_t)cop;
- op[2] = (uintptr_t)req;
- op[3] = mdata_len;
-
- return req;
-}
-
-static __rte_always_inline int
-process_outb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- uint32_t dlen, rlen, extend_head, extend_tail;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- struct otx2_ipsec_po_out_hdr *hdr;
- struct otx2_ipsec_po_out_sa *sa;
- int hdr_len, mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- char *mdata, *data;
-
- sa = &sess->out_sa;
- hdr_len = sizeof(*hdr);
-
- dlen = rte_pktmbuf_pkt_len(m_src) + hdr_len;
- rlen = otx2_ipsec_po_out_rlen_get(sess, dlen - hdr_len);
-
- extend_head = hdr_len + RTE_ETHER_HDR_LEN;
- extend_tail = rlen - dlen;
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, extend_tail + mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- mdata += extend_tail; /* mdata follows encrypted data */
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- data = rte_pktmbuf_prepend(m_src, extend_head);
- if (unlikely(data == NULL)) {
- otx2_err("Not enough head room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- /*
- * Move the Ethernet header, to insert otx2_ipsec_po_out_hdr prior
- * to the IP header
- */
- memcpy(data, data + hdr_len, RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_po_out_hdr *)rte_pktmbuf_adj(m_src,
- RTE_ETHER_HDR_LEN);
-
- memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *,
- sess->iv_offset), sess->iv_length);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
- sa->esn_hi = sess->seq_hi;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq_lo);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
-exit:
- *prep_req = req;
-
- return ret;
-}
-
-static __rte_always_inline int
-process_inb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- int mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- uint32_t dlen;
- char *mdata;
-
- dlen = rte_pktmbuf_pkt_len(m_src);
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
-exit:
- *prep_req = req;
- return ret;
-}
-#endif /* __OTX2_IPSEC_PO_OPS_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_security.h b/drivers/crypto/octeontx2/otx2_security.h
deleted file mode 100644
index 29c8fc351b..0000000000
--- a/drivers/crypto/octeontx2/otx2_security.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SECURITY_H__
-#define __OTX2_SECURITY_H__
-
-#include <rte_security.h>
-
-#include "otx2_cryptodev_sec.h"
-#include "otx2_ethdev_sec.h"
-
-#define OTX2_SEC_AH_HDR_LEN 12
-#define OTX2_SEC_AES_GCM_IV_LEN 8
-#define OTX2_SEC_AES_GCM_MAC_LEN 16
-#define OTX2_SEC_AES_CBC_IV_LEN 16
-#define OTX2_SEC_SHA1_HMAC_LEN 12
-#define OTX2_SEC_SHA2_HMAC_LEN 16
-
-#define OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN 16
-
-struct otx2_sec_session_ipsec {
- union {
- struct otx2_sec_session_ipsec_ip ip;
- struct otx2_sec_session_ipsec_lp lp;
- };
- enum rte_security_ipsec_sa_direction dir;
-};
-
-struct otx2_sec_session {
- struct otx2_sec_session_ipsec ipsec;
- void *userdata;
- /**< Userdata registered by the application */
-} __rte_cache_aligned;
-
-#endif /* __OTX2_SECURITY_H__ */
diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map
deleted file mode 100644
index d36663132a..0000000000
--- a/drivers/crypto/octeontx2/version.map
+++ /dev/null
@@ -1,13 +0,0 @@
-DPDK_22 {
- local: *;
-};
-
-INTERNAL {
- global:
-
- otx2_cryptodev_driver_id;
- otx2_cpt_af_reg_read;
- otx2_cpt_af_reg_write;
-
- local: *;
-};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b68ce6c0a4..8db9775d7b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1127,6 +1127,16 @@ cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 63d6b410b2..d6706b57f7 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -11,7 +11,6 @@ drivers = [
'dpaa',
'dpaa2',
'dsw',
- 'octeontx2',
'opdl',
'skeleton',
'sw',
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
deleted file mode 100644
index ce360af5f8..0000000000
--- a/drivers/event/octeontx2/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_worker.c',
- 'otx2_worker_dual.c',
- 'otx2_evdev.c',
- 'otx2_evdev_adptr.c',
- 'otx2_evdev_crypto_adptr.c',
- 'otx2_evdev_irq.c',
- 'otx2_evdev_selftest.c',
- 'otx2_tim_evdev.c',
- 'otx2_tim_worker.c',
-)
-
-deps += ['bus_pci', 'common_octeontx2', 'crypto_octeontx2', 'mempool_octeontx2', 'net_octeontx2']
-
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../common/cpt')
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
deleted file mode 100644
index ccf28b678b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ /dev/null
@@ -1,1900 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <eventdev_pmd_pci.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_tx.h"
-#include "otx2_evdev_stats.h"
-#include "otx2_irq.h"
-#include "otx2_tim_evdev.h"
-
-static inline int
-sso_get_msix_offsets(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get SSO and SSOW MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < nb_ports; i++)
- dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
-
- for (i = 0; i < dev->nb_event_queues; i++)
- dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
-
- return rc;
-}
-
-void
-sso_fastpath_fns_set(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- /* Single WS modes */
- const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
-
- /* Dual WS modes */
- const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t
- ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- /* Tx modes */
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- event_dev->enqueue = otx2_ssogws_enq;
- event_dev->enqueue_burst = otx2_ssogws_enq_burst;
- event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
- event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_deq_seg
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_seg_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_seg_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_seg_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_deq
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_tx_adptr_enq
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_ca_enq;
-
- if (dev->dual_ws) {
- event_dev->enqueue = otx2_ssogws_dual_enq;
- event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
- event_dev->enqueue_new_burst =
- otx2_ssogws_dual_enq_new_burst;
- event_dev->enqueue_forward_burst =
- otx2_ssogws_dual_enq_fwd_burst;
-
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_dual_deq_seg
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_seg_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_seg_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_dual_deq
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
- }
-
- event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
- rte_mb();
-}
-
-static void
-otx2_sso_info_get(struct rte_eventdev *event_dev,
- struct rte_event_dev_info *dev_info)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
- dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
- dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
- dev_info->max_event_queues = dev->max_event_queues;
- dev_info->max_event_queue_flows = (1ULL << 20);
- dev_info->max_event_queue_priority_levels = 8;
- dev_info->max_event_priority_levels = 1;
- dev_info->max_event_ports = dev->max_event_ports;
- dev_info->max_event_port_dequeue_depth = 1;
- dev_info->max_event_port_enqueue_depth = 1;
- dev_info->max_num_events = dev->max_num_events;
- dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
- RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
- RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
- RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-}
-
-static void
-sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t val;
-
- val = queue;
- val |= 0ULL << 12; /* SET 0 */
- val |= 0x8000800080000000; /* Dont modify rest of the masks */
- val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
-
- otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
-}
-
-static int
-otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t link;
-
- RTE_SET_USED(priorities);
- for (link = 0; link < nb_links; link++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[link], true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[link], true);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[link], true);
- }
- }
- sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
-
- return (int)nb_links;
-}
-
-static int
-otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t unlink;
-
- for (unlink = 0; unlink < nb_unlinks; unlink++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[unlink],
- false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[unlink],
- false);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[unlink], false);
- }
- }
- sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
-
- return (int)nb_unlinks;
-}
-
-static int
-sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
- uint16_t nb_lf, uint8_t attach)
-{
- if (attach) {
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = nb_lf;
- break;
- case SSO_LF_GWS:
- req->ssow = nb_lf;
- break;
- default:
- return -EINVAL;
- }
- req->modify = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = true;
- break;
- case SSO_LF_GWS:
- req->ssow = true;
- break;
- default:
- return -EINVAL;
- }
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- }
-
- return 0;
-}
-
-static int
-sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
- enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
-{
- void *rsp;
- int rc;
-
- if (alloc) {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_alloc_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_alloc_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- } else {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_free_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_free_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- }
-
- rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
- if (rc < 0)
- return rc;
-
- if (alloc && type == SSO_LF_GGRP) {
- struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
-
- dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
- dev->xae_waes = rsp_ggrp->xaq_wq_entries;
- dev->iue = rsp_ggrp->in_unit_entries;
- }
-
- return 0;
-}
-
-static void
-otx2_sso_port_release(void *port)
-{
- struct otx2_ssogws_cookie *gws_cookie = ssogws_get_cookie(port);
- struct otx2_sso_evdev *dev;
- int i;
-
- if (!gws_cookie->configured)
- goto free;
-
- dev = sso_pmd_priv(gws_cookie->event_dev);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], i, false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], i, false);
- }
- memset(ws, 0, sizeof(*ws));
- } else {
- struct otx2_ssogws *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++)
- sso_port_link_modify(ws, i, false);
- memset(ws, 0, sizeof(*ws));
- }
-
- memset(gws_cookie, 0, sizeof(*gws_cookie));
-
-free:
- rte_free(gws_cookie);
-}
-
-static void
-otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-}
-
-static void
-sso_restore_links(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t *links_map;
- int i, j;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], j, true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify(ws, j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- }
- }
-}
-
-static void
-sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
-{
- ws->tag_op = base + SSOW_LF_GWS_TAG;
- ws->wqp_op = base + SSOW_LF_GWS_WQP;
- ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
- ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
- ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
- ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
-}
-
-static int
-sso_configure_dual_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t vws = 0;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports * 2;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws_dual *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws_dual) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws_dual *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
- ws->base[0] = base;
- vws++;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
- ws->base[1] = base;
- vws++;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < nb_lf; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
- sso_set_port_ops(ws, base);
- ws->base = base;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_queues(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int rc;
-
- otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
-
- nb_lf = dev->nb_event_queues;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GGRP LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
- otx2_err("Failed to init SSO GGRP LF");
- return -ENODEV;
- }
-
- return rc;
-}
-
-static int
-sso_xaq_allocate(struct otx2_sso_evdev *dev)
-{
- const struct rte_memzone *mz;
- struct npa_aura_s *aura;
- static int reconfig_cnt;
- char pool_name[RTE_MEMZONE_NAMESIZE];
- uint32_t xaq_cnt;
- int rc;
-
- if (dev->xaq_pool)
- rte_mempool_free(dev->xaq_pool);
-
- /*
- * Allocate memory for Add work backpressure.
- */
- mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
- if (mz == NULL)
- mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
- OTX2_ALIGN +
- sizeof(struct npa_aura_s),
- rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG,
- OTX2_ALIGN);
- if (mz == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- return -ENOMEM;
- }
-
- dev->fc_iova = mz->iova;
- dev->fc_mem = mz->addr;
- *dev->fc_mem = 0;
- aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
- memset(aura, 0, sizeof(struct npa_aura_s));
-
- aura->fc_ena = 1;
- aura->fc_addr = dev->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
-
- /* Taken from HRM 14.3.3(4) */
- xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
- if (dev->xae_cnt)
- xaq_cnt += dev->xae_cnt / dev->xae_waes;
- else if (dev->adptr_xae_cnt)
- xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
- else
- xaq_cnt += (dev->iue / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-
- otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
- /* Setup XAQ based on number of nb queues. */
- snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt);
- dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
- xaq_cnt, dev->xaq_buf_size, 0, 0,
- rte_socket_id(), 0);
-
- if (dev->xaq_pool == NULL) {
- otx2_err("Unable to create empty mempool.");
- rte_memzone_free(mz);
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(dev->xaq_pool,
- rte_mbuf_platform_mempool_ops(), aura);
- if (rc != 0) {
- otx2_err("Unable to set xaqpool ops.");
- goto alloc_fail;
- }
-
- rc = rte_mempool_populate_default(dev->xaq_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate xaqpool.");
- goto alloc_fail;
- }
- reconfig_cnt++;
- /* When SW does addwork (enqueue) check if there is space in XAQ by
- * comparing fc_addr above against the xaq_lmt calculated below.
- * There should be a minimum headroom (OTX2_SSO_XAQ_SLACK / 2) for SSO
- * to request XAQ to cache them even before enqueue is called.
- */
- dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
- dev->nb_event_queues);
- dev->nb_xaq_cfg = xaq_cnt;
-
- return 0;
-alloc_fail:
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(mz);
- return rc;
-}
-
-static int
-sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_hw_setconfig *req;
-
- otx2_sso_dbg("Configuring XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-sso_ggrp_free_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_release_xaq *req;
-
- otx2_sso_dbg("Freeing XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_release_xaq_aura(mbox);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static void
-sso_lf_teardown(struct otx2_sso_evdev *dev,
- enum otx2_sso_lf_type lf_type)
-{
- uint8_t nb_lf;
-
- switch (lf_type) {
- case SSO_LF_GGRP:
- nb_lf = dev->nb_event_queues;
- break;
- case SSO_LF_GWS:
- nb_lf = dev->nb_event_ports;
- nb_lf *= dev->dual_ws ? 2 : 1;
- break;
- default:
- return;
- }
-
- sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
- sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
-}
-
-static int
-otx2_sso_configure(const struct rte_eventdev *event_dev)
-{
- struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t deq_tmo_ns;
- int rc;
-
- sso_func_trace();
- deq_tmo_ns = conf->dequeue_timeout_ns;
-
- if (deq_tmo_ns == 0)
- deq_tmo_ns = dev->min_dequeue_timeout_ns;
-
- if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
- deq_tmo_ns > dev->max_dequeue_timeout_ns) {
- otx2_err("Unsupported dequeue timeout requested");
- return -EINVAL;
- }
-
- if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
- dev->is_timeout_deq = 1;
-
- dev->deq_tmo_ns = deq_tmo_ns;
-
- if (conf->nb_event_ports > dev->max_event_ports ||
- conf->nb_event_queues > dev->max_event_queues) {
- otx2_err("Unsupported event queues/ports requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_dequeue_depth > 1) {
- otx2_err("Unsupported event port deq depth requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_enqueue_depth > 1) {
- otx2_err("Unsupported event port enq depth requested");
- return -EINVAL;
- }
-
- if (dev->configured)
- sso_unregister_irqs(event_dev);
-
- if (dev->nb_event_queues) {
- /* Finit any previous queues. */
- sso_lf_teardown(dev, SSO_LF_GGRP);
- }
- if (dev->nb_event_ports) {
- /* Finit any previous ports. */
- sso_lf_teardown(dev, SSO_LF_GWS);
- }
-
- dev->nb_event_queues = conf->nb_event_queues;
- dev->nb_event_ports = conf->nb_event_ports;
-
- if (dev->dual_ws)
- rc = sso_configure_dual_ports(event_dev);
- else
- rc = sso_configure_ports(event_dev);
-
- if (rc < 0) {
- otx2_err("Failed to configure event ports");
- return -ENODEV;
- }
-
- if (sso_configure_queues(event_dev) < 0) {
- otx2_err("Failed to configure event queues");
- rc = -ENODEV;
- goto teardown_hws;
- }
-
- if (sso_xaq_allocate(dev) < 0) {
- rc = -ENOMEM;
- goto teardown_hwggrp;
- }
-
- /* Restore any prior port-queue mapping. */
- sso_restore_links(event_dev);
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_get_msix_offsets(event_dev);
- if (rc < 0) {
- otx2_err("Failed to get msix offsets %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_register_irqs(event_dev);
- if (rc < 0) {
- otx2_err("Failed to register irq %d", rc);
- goto teardown_hwggrp;
- }
-
- dev->configured = 1;
- rte_mb();
-
- return 0;
-teardown_hwggrp:
- sso_lf_teardown(dev, SSO_LF_GGRP);
-teardown_hws:
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
- dev->configured = 0;
- return rc;
-}
-
-static void
-otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
- struct rte_event_queue_conf *queue_conf)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-
- queue_conf->nb_atomic_flows = (1ULL << 20);
- queue_conf->nb_atomic_order_sequences = (1ULL << 20);
- queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
- queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
-}
-
-static int
-otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
- const struct rte_event_queue_conf *queue_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_grp_priority *req;
- int rc;
-
- sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
-
- req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
- req->grp = queue_id;
- req->weight = 0xFF;
- req->affinity = 0xFF;
- /* Normalize <0-255> to <0-7> */
- req->priority = queue_conf->priority / 32;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to set priority queue=%d", queue_id);
- return rc;
- }
-
- return 0;
-}
-
-static void
-otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
- struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- RTE_SET_USED(port_id);
- port_conf->new_event_threshold = dev->max_num_events;
- port_conf->dequeue_depth = 1;
- port_conf->enqueue_depth = 1;
-}
-
-static int
-otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
- const struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
- uint64_t val;
- uint16_t q;
-
- sso_func_trace("Port=%d", port_id);
- RTE_SET_USED(port_conf);
-
- if (event_dev->data->ports[port_id] == NULL) {
- otx2_err("Invalid port Id %d", port_id);
- return -EINVAL;
- }
-
- for (q = 0; q < dev->nb_event_queues; q++) {
- grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
- if (grps_base[q] == 0) {
- otx2_err("Failed to get grp[%d] base addr", q);
- return -EINVAL;
- }
- }
-
- /* Set get_work timeout for HWS */
- val = NSEC2USEC(dev->deq_tmo_ns) - 1;
-
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[port_id];
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
- }
-
- otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
-
- return 0;
-}
-
-static int
-otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
- uint64_t *tmo_ticks)
-{
- RTE_SET_USED(event_dev);
- *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
-
- return 0;
-}
-
-static void
-ssogws_dump(struct otx2_ssogws *ws, FILE *f)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_LINKS));
- fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDWQP));
- fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
- fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_NW_TIM));
- fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_SWTP));
- fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDTAG));
-}
-
-static void
-ssoggrp_dump(uintptr_t base, FILE *f)
-{
- fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_QCTL));
- fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
- fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_THR));
- fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_THR));
- fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
-}
-
-static void
-otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t queue;
- uint8_t port;
-
- fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
- "dual_ws" : "single_ws");
- /* Dump SSOW registers */
- for (port = 0; port < dev->nb_event_ports; port++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws =
- event_dev->data->ports[port];
-
- fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 0);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
- fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 1);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
- } else {
- fprintf(f, "[%s]SSO single workslot[%d] dump\n",
- __func__, port);
- ssogws_dump(event_dev->data->ports[port], f);
- }
- }
-
- /* Dump SSO registers */
- for (queue = 0; queue < dev->nb_event_queues; queue++) {
- fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- }
- }
-}
-
-static void
-otx2_handle_event(void *arg, struct rte_event event)
-{
- struct rte_eventdev *event_dev = arg;
-
- if (event_dev->dev_ops->dev_stop_flush != NULL)
- event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
- event, event_dev->data->dev_stop_flush_arg);
-}
-
-static void
-sso_qos_cfg(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct sso_grp_qos_cfg *req;
- uint16_t i;
-
- for (i = 0; i < dev->qos_queue_cnt; i++) {
- uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
- uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
- uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
-
- if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
- continue;
-
- req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
- req->xaq_limit = (dev->nb_xaq_cfg *
- (xaq_prcnt ? xaq_prcnt : 100)) / 100;
- req->taq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
- (iaq_prcnt ? iaq_prcnt : 100)) / 100;
- req->iaq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
- (taq_prcnt ? taq_prcnt : 100)) / 100;
- }
-
- if (dev->qos_queue_cnt)
- otx2_mbox_process(dev->mbox);
-}
-
-static void
-sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
- ws->swtag_req = 0;
- ws->vws = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset(ws);
- ws->swtag_req = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- }
- }
-
- rte_mb();
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- struct otx2_ssogws temp_ws;
-
- memcpy(&temp_ws, &ws->ws_state[0],
- sizeof(struct otx2_ssogws_state));
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- }
-
- /* reset SSO GWS cache */
- otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
- otx2_mbox_process(dev->mbox);
-}
-
-int
-sso_xae_reconfigure(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int rc = 0;
-
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 0);
-
- rc = sso_ggrp_free_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to free XAQ\n");
- return rc;
- }
-
- rte_mempool_free(dev->xaq_pool);
- dev->xaq_pool = NULL;
- rc = sso_xaq_allocate(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq pool %d", rc);
- return rc;
- }
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- return rc;
- }
-
- rte_mb();
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 1);
-
- return 0;
-}
-
-static int
-otx2_sso_start(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_qos_cfg(event_dev);
- sso_cleanup(event_dev, 1);
- sso_fastpath_fns_set(event_dev);
-
- return 0;
-}
-
-static void
-otx2_sso_stop(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_cleanup(event_dev, 0);
- rte_mb();
-}
-
-static int
-otx2_sso_close(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- uint16_t i;
-
- if (!dev->configured)
- return 0;
-
- sso_unregister_irqs(event_dev);
-
- for (i = 0; i < dev->nb_event_queues; i++)
- all_queues[i] = i;
-
- for (i = 0; i < dev->nb_event_ports; i++)
- otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
- all_queues, dev->nb_event_queues);
-
- sso_lf_teardown(dev, SSO_LF_GGRP);
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_ports = 0;
- dev->nb_event_queues = 0;
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME));
-
- return 0;
-}
-
-/* Initialize and register event driver with DPDK Application */
-static struct eventdev_ops otx2_sso_ops = {
- .dev_infos_get = otx2_sso_info_get,
- .dev_configure = otx2_sso_configure,
- .queue_def_conf = otx2_sso_queue_def_conf,
- .queue_setup = otx2_sso_queue_setup,
- .queue_release = otx2_sso_queue_release,
- .port_def_conf = otx2_sso_port_def_conf,
- .port_setup = otx2_sso_port_setup,
- .port_release = otx2_sso_port_release,
- .port_link = otx2_sso_port_link,
- .port_unlink = otx2_sso_port_unlink,
- .timeout_ticks = otx2_sso_timeout_ticks,
-
- .eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
- .eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add,
- .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del,
- .eth_rx_adapter_start = otx2_sso_rx_adapter_start,
- .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop,
-
- .eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get,
- .eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add,
- .eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del,
-
- .timer_adapter_caps_get = otx2_tim_caps_get,
-
- .crypto_adapter_caps_get = otx2_ca_caps_get,
- .crypto_adapter_queue_pair_add = otx2_ca_qp_add,
- .crypto_adapter_queue_pair_del = otx2_ca_qp_del,
-
- .xstats_get = otx2_sso_xstats_get,
- .xstats_reset = otx2_sso_xstats_reset,
- .xstats_get_names = otx2_sso_xstats_get_names,
-
- .dump = otx2_sso_dump,
- .dev_start = otx2_sso_start,
- .dev_stop = otx2_sso_stop,
- .dev_close = otx2_sso_close,
- .dev_selftest = otx2_sso_selftest,
-};
-
-#define OTX2_SSO_XAE_CNT "xae_cnt"
-#define OTX2_SSO_SINGLE_WS "single_ws"
-#define OTX2_SSO_GGRP_QOS "qos"
-#define OTX2_SSO_FORCE_BP "force_rx_bp"
-
-static void
-parse_queue_param(char *value, void *opaque)
-{
- struct otx2_sso_qos queue_qos = {0};
- uint8_t *val = (uint8_t *)&queue_qos;
- struct otx2_sso_evdev *dev = opaque;
- char *tok = strtok(value, "-");
- struct otx2_sso_qos *old_ptr;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&queue_qos.iaq_prcnt + 1)) {
- otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
- return;
- }
-
- dev->qos_queue_cnt++;
- old_ptr = dev->qos_parse_data;
- dev->qos_parse_data = rte_realloc(dev->qos_parse_data,
- sizeof(struct otx2_sso_qos) *
- dev->qos_queue_cnt, 0);
- if (dev->qos_parse_data == NULL) {
- dev->qos_parse_data = old_ptr;
- dev->qos_queue_cnt--;
- return;
- }
- dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
-}
-
-static void
-parse_qos_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- parse_queue_param(start + 1, opaque);
- s = end;
- start = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
- * isn't allowed. Everything is expressed in percentages, 0 represents
- * default.
- */
- parse_qos_list(value, opaque);
-
- return 0;
-}
-
-static void
-sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
-{
- struct rte_kvargs *kvlist;
- uint8_t single_ws = 0;
-
- if (devargs == NULL)
- return;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
- &dev->xae_cnt);
- rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
- &single_ws);
- rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
- dev);
- rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag,
- &dev->force_rx_bp);
- otx2_parse_common_devargs(kvlist);
- dev->dual_ws = !single_ws;
- rte_kvargs_free(kvlist);
-}
-
-static int
-otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_probe(pci_drv, pci_dev,
- sizeof(struct otx2_sso_evdev),
- otx2_sso_init);
-}
-
-static int
-otx2_sso_remove(struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini);
-}
-
-static const struct rte_pci_id pci_sso_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_sso = {
- .id_table = pci_sso_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = otx2_sso_probe,
- .remove = otx2_sso_remove,
-};
-
-int
-otx2_sso_init(struct rte_eventdev *event_dev)
-{
- struct free_rsrcs_rsp *rsrc_cnt;
- struct rte_pci_device *pci_dev;
- struct otx2_sso_evdev *dev;
- int rc;
-
- event_dev->dev_ops = &otx2_sso_ops;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- sso_fastpath_fns_set(event_dev);
- return 0;
- }
-
- dev = sso_pmd_priv(event_dev);
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
-
- /* Get SSO and SSOW MSIX rsrc cnt */
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count");
- goto otx2_dev_uninit;
- }
- otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso,
- rsrc_cnt->ssow, rsrc_cnt->npa);
-
- dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS);
- dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP);
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Unable to init NPA lf. It might not be provisioned");
- goto otx2_dev_uninit;
- }
-
- dev->drv_inited = true;
- dev->is_timeout_deq = 0;
- dev->min_dequeue_timeout_ns = USEC2NSEC(1);
- dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
- dev->max_num_events = -1;
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
-
- if (!dev->max_event_ports || !dev->max_event_queues) {
- otx2_err("Not enough eventdev resource queues=%d ports=%d",
- dev->max_event_queues, dev->max_event_ports);
- rc = -ENODEV;
- goto otx2_npa_lf_uninit;
- }
-
- dev->dual_ws = 1;
- sso_parse_devargs(dev, pci_dev->device.devargs);
- if (dev->dual_ws) {
- otx2_sso_dbg("Using dual workslot mode");
- dev->max_event_ports = dev->max_event_ports / 2;
- } else {
- otx2_sso_dbg("Using single workslot mode");
- }
-
- otx2_sso_pf_func_set(dev->pf_func);
- otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
- event_dev->data->name, dev->max_event_queues,
- dev->max_event_ports);
-
- otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
-
- return 0;
-
-otx2_npa_lf_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- return rc;
-}
-
-int
-otx2_sso_fini(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct rte_pci_device *pci_dev;
-
- /* For secondary processes, nothing to be done */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("Common resource in use by other devices");
- return -EAGAIN;
- }
-
- otx2_tim_fini();
- otx2_dev_fini(pci_dev, dev);
-
- return 0;
-}
-
-RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
-RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
-RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
- OTX2_SSO_SINGLE_WS "=1"
- OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_FORCE_BP "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
deleted file mode 100644
index a5d34b7df7..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_H__
-#define __OTX2_EVDEV_H__
-
-#include <rte_eventdev.h>
-#include <eventdev_pmd.h>
-#include <rte_event_eth_rx_adapter.h>
-#include <rte_event_eth_tx_adapter.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_mempool.h"
-#include "otx2_tim_evdev.h"
-
-#define EVENTDEV_NAME_OCTEONTX2_PMD event_octeontx2
-
-#define sso_func_trace otx2_sso_dbg
-
-#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
-#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
-#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc"
-#define OTX2_SSO_SQB_LIMIT (0x180)
-#define OTX2_SSO_XAQ_SLACK (8)
-#define OTX2_SSO_XAQ_CACHE_CNT (0x7)
-#define OTX2_SSO_WQE_SG_PTR (9)
-
-/* SSO LF register offsets (BAR2) */
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-/* SSOW LF register offsets (BAR2) */
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
-#define OTX2_SSOW_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
-#define OTX2_SSOW_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
-
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us) * 1E3)
-#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
-#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq))
-
-enum otx2_sso_lf_type {
- SSO_LF_GGRP,
- SSO_LF_GWS
-};
-
-union otx2_sso_event {
- uint64_t get_work0;
- struct {
- uint32_t flow_id:20;
- uint32_t sub_event_type:8;
- uint32_t event_type:4;
- uint8_t op:2;
- uint8_t rsvd:4;
- uint8_t sched_type:2;
- uint8_t queue_id;
- uint8_t priority;
- uint8_t impl_opaque;
- };
-} __rte_aligned(64);
-
-enum {
- SSO_SYNC_ORDERED,
- SSO_SYNC_ATOMIC,
- SSO_SYNC_UNTAGGED,
- SSO_SYNC_EMPTY
-};
-
-struct otx2_sso_qos {
- uint8_t queue;
- uint8_t xaq_prcnt;
- uint8_t taq_prcnt;
- uint8_t iaq_prcnt;
-};
-
-struct otx2_sso_evdev {
- OTX2_DEV; /* Base class */
- uint8_t max_event_queues;
- uint8_t max_event_ports;
- uint8_t is_timeout_deq;
- uint8_t nb_event_queues;
- uint8_t nb_event_ports;
- uint8_t configured;
- uint32_t deq_tmo_ns;
- uint32_t min_dequeue_timeout_ns;
- uint32_t max_dequeue_timeout_ns;
- int32_t max_num_events;
- uint64_t *fc_mem;
- uint64_t xaq_lmt;
- uint64_t nb_xaq_cfg;
- rte_iova_t fc_iova;
- struct rte_mempool *xaq_pool;
- uint64_t rx_offloads;
- uint64_t tx_offloads;
- uint64_t adptr_xae_cnt;
- uint16_t rx_adptr_pool_cnt;
- uint64_t *rx_adptr_pools;
- uint16_t max_port_id;
- uint16_t tim_adptr_ring_cnt;
- uint16_t *timer_adptr_rings;
- uint64_t *timer_adptr_sz;
- /* Dev args */
- uint8_t dual_ws;
- uint32_t xae_cnt;
- uint8_t qos_queue_cnt;
- uint8_t force_rx_bp;
- struct otx2_sso_qos *qos_parse_data;
- /* HW const */
- uint32_t xae_waes;
- uint32_t xaq_buf_size;
- uint32_t iue;
- /* MSIX offsets */
- uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
- uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
-} __rte_cache_aligned;
-
-#define OTX2_SSOGWS_OPS \
- /* WS ops */ \
- uintptr_t getwrk_op; \
- uintptr_t tag_op; \
- uintptr_t wqp_op; \
- uintptr_t swtag_flush_op; \
- uintptr_t swtag_norm_op; \
- uintptr_t swtag_desched_op;
-
-/* Event port aka GWS */
-struct otx2_ssogws {
- /* Get Work Fastpath data */
- OTX2_SSOGWS_OPS;
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-struct otx2_ssogws_state {
- OTX2_SSOGWS_OPS;
-};
-
-struct otx2_ssogws_dual {
- /* Get Work Fastpath data */
- struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t vws; /* Ping pong bit */
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base[2] __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-static inline struct otx2_sso_evdev *
-sso_pmd_priv(const struct rte_eventdev *event_dev)
-{
- return event_dev->data->dev_private;
-}
-
-struct otx2_ssogws_cookie {
- const struct rte_eventdev *event_dev;
- bool configured;
-};
-
-static inline struct otx2_ssogws_cookie *
-ssogws_get_cookie(void *ws)
-{
- return (struct otx2_ssogws_cookie *)
- ((uint8_t *)ws - RTE_CACHE_LINE_SIZE);
-}
-
-static const union mbuf_initializer mbuf_init = {
- .fields = {
- .data_off = RTE_PKTMBUF_HEADROOM,
- .refcnt = 1,
- .nb_segs = 1,
- .port = 0
- }
-};
-
-static __rte_always_inline void
-otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id,
- const uint32_t tag, const uint32_t flags,
- const void * const lookup_mem)
-{
- struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1;
- uint64_t val = mbuf_init.value | (uint64_t)port_id << 48;
-
- if (flags & NIX_RX_OFFLOAD_TSTAMP_F)
- val |= NIX_TIMESYNC_RX_OFFSET;
-
- otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
- (struct rte_mbuf *)mbuf, lookup_mem,
- val, flags);
-
-}
-
-static inline int
-parse_kvargs_flag(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint8_t *)opaque = !!atoi(value);
- return 0;
-}
-
-static inline int
-parse_kvargs_value(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint32_t *)opaque = (uint32_t)atoi(value);
- return 0;
-}
-
-#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES
-#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES
-
-/* Single WS API's */
-uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Dual WS API's */
-uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Auto generated API's */
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
- \
-uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks);\
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events); \
-uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
- uint32_t event_type);
-int sso_xae_reconfigure(struct rte_eventdev *event_dev);
-void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
-
-int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
-int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id);
-int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_tx_adapter_queue_add(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-int otx2_sso_tx_adapter_queue_del(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-/* Event crypto adapter API's */
-int otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps);
-
-int otx2_ca_qp_add(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id,
- const struct rte_event *event);
-
-int otx2_ca_qp_del(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id);
-
-/* Clean up API's */
-typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
-void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
- uintptr_t base, otx2_handle_event_t fn, void *arg);
-void ssogws_reset(struct otx2_ssogws *ws);
-/* Selftest */
-int otx2_sso_selftest(void);
-/* Init and Fini API's */
-int otx2_sso_init(struct rte_eventdev *event_dev);
-int otx2_sso_fini(struct rte_eventdev *event_dev);
-/* IRQ handlers */
-int sso_register_irqs(const struct rte_eventdev *event_dev);
-void sso_unregister_irqs(const struct rte_eventdev *event_dev);
-
-#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
deleted file mode 100644
index a91f784b1e..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ /dev/null
@@ -1,656 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019-2021 Marvell.
- */
-
-#include "otx2_evdev.h"
-
-#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100)
-
-int
-otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int rc;
-
- RTE_SET_USED(event_dev);
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
- else
- *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ;
-
- return 0;
-}
-
-static inline int
-sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp,
- uint16_t eth_port_id)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq.caching = 0;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 1;
- aq->rq.sso_tt = tt;
- aq->rq.sso_grp = ggrp;
- aq->rq.ena_wqwd = 1;
- /* Mbuf Header generation :
- * > FIRST_SKIP is a super set of WQE_SKIP, dont modify first skip as
- * it already has data related to mbuf size, headroom, private area.
- * > Using WQE_SKIP we can directly assign
- * mbuf = wqe - sizeof(struct mbuf);
- * so that mbuf header will not have unpredicted values while headroom
- * and private data starts at the beginning of wqe_data.
- */
- aq->rq.wqe_skip = 1;
- aq->rq.wqe_caching = 1;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 20; /* 20-bits */
-
- /* Flow Tag calculation :
- *
- * rq_tag <31:24> = good/bad_tag<8:0>;
- * rq_tag <23:0> = [ltag]
- *
- * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20>
- * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag);
- *
- * Setup :
- * ltag<23:0> = (eth_port_id & 0xF) << 20;
- * good/bad_tag<8:0> =
- * ((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4);
- *
- * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) |
- * <27:20> (eth_port_id) | <20:0> [TAG]
- */
-
- aq->rq.ltag = (eth_port_id & 0xF) << 20;
- aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) |
- (RTE_EVENT_TYPE_ETHDEV << 4);
- aq->rq.bad_utag = aq->rq.good_utag;
-
- aq->rq.ena = 0; /* Don't enable RQ yet */
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to init rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-static inline int
-sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to enable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 0;
- aq->rq.sso_tt = SSO_TT_UNTAGGED;
- aq->rq.sso_grp = 0;
- aq->rq.ena_wqwd = 0;
- aq->rq.wqe_caching = 0;
- aq->rq.wqe_skip = 0;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 0x20;
- aq->rq.ltag = 0;
- aq->rq.good_utag = 0;
- aq->rq.bad_utag = 0;
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to clear rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-void
-sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
-{
- int i;
-
- switch (event_type) {
- case RTE_EVENT_TYPE_ETHDEV:
- {
- struct otx2_eth_rxq *rxq = data;
- uint64_t *old_ptr;
-
- for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
- if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i])
- return;
- }
-
- dev->rx_adptr_pool_cnt++;
- old_ptr = dev->rx_adptr_pools;
- dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools,
- sizeof(uint64_t) *
- dev->rx_adptr_pool_cnt, 0);
- if (dev->rx_adptr_pools == NULL) {
- dev->adptr_xae_cnt += rxq->pool->size;
- dev->rx_adptr_pools = old_ptr;
- dev->rx_adptr_pool_cnt--;
- return;
- }
- dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
- (uint64_t)rxq->pool;
-
- dev->adptr_xae_cnt += rxq->pool->size;
- break;
- }
- case RTE_EVENT_TYPE_TIMER:
- {
- struct otx2_tim_ring *timr = data;
- uint16_t *old_ring_ptr;
- uint64_t *old_sz_ptr;
-
- for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
- if (timr->ring_id != dev->timer_adptr_rings[i])
- continue;
- if (timr->nb_timers == dev->timer_adptr_sz[i])
- return;
- dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz[i] = timr->nb_timers;
-
- return;
- }
-
- dev->tim_adptr_ring_cnt++;
- old_ring_ptr = dev->timer_adptr_rings;
- old_sz_ptr = dev->timer_adptr_sz;
-
- dev->timer_adptr_rings = rte_realloc(dev->timer_adptr_rings,
- sizeof(uint16_t) *
- dev->tim_adptr_ring_cnt,
- 0);
- if (dev->timer_adptr_rings == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_rings = old_ring_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_sz = rte_realloc(dev->timer_adptr_sz,
- sizeof(uint64_t) *
- dev->tim_adptr_ring_cnt,
- 0);
-
- if (dev->timer_adptr_sz == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz = old_sz_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
- timr->ring_id;
- dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
- timr->nb_timers;
-
- dev->adptr_xae_cnt += timr->nb_timers;
- break;
- }
- default:
- break;
- }
-}
-
-static inline void
-sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- }
- }
-}
-
-static inline void
-sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev,
- struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq,
- uint8_t ena)
-{
- struct otx2_fc_info *fc = &otx2_eth_dev->fc_info;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint32_t limit;
- int rc;
-
- if (otx2_dev_is_sdp(otx2_eth_dev))
- return;
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return;
- mbox = lf->mbox;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return;
-
- limit = rsp->aura.limit;
- /* BP is already enabled. */
- if (rsp->aura.bp_ena) {
- /* If BP ids don't match disable BP. */
- if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) {
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id =
- npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- req->aura.bp_ena = 0;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
- }
- return;
- }
-
- /* BP was previously enabled but now disabled skip. */
- if (rsp->aura.bp)
- return;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- if (ena) {
- req->aura.nix0_bpid = fc->bpid[0];
- req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
- req->aura.bp = NIX_RQ_AURA_THRESH(
- limit > 128 ? 256 : limit); /* 95% of size*/
- req->aura_mask.bp = ~(req->aura_mask.bp);
- }
-
- req->aura.bp_ena = !!ena;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t port = eth_dev->data->port_id;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure(
- (struct rte_eventdev *)(uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, i,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- }
- rxq = eth_dev->data->rx_queues[0];
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- } else {
- rxq = eth_dev->data->rx_queues[rx_queue_id];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure((struct rte_eventdev *)
- (uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- }
-
- if (rc < 0) {
- otx2_err("Failed to configure Rx adapter port=%d, q=%d", port,
- queue_conf->ev.queue_id);
- return rc;
- }
-
- dev->rx_offloads |= otx2_eth_dev->rx_offload_flags;
- dev->tstamp = &otx2_eth_dev->tstamp;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = sso_rxq_disable(otx2_eth_dev, i);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[i], false);
- }
- } else {
- rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[rx_queue_id],
- false);
- }
-
- if (rc < 0)
- otx2_err("Failed to clear Rx adapter config port=%d, q=%d",
- eth_dev->data->port_id, rx_queue_id);
-
- return rc;
-}
-
-int
-otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int ret;
-
- RTE_SET_USED(dev);
- ret = strncmp(eth_dev->device->driver->name, "net_octeontx2,", 13);
- if (ret)
- *caps = 0;
- else
- *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
-
- return 0;
-}
-
-static int
-sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-sso_add_tx_queue_data(const struct rte_eventdev *event_dev,
- uint16_t eth_port_id, uint16_t tx_queue_id,
- struct otx2_eth_txq *txq)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < event_dev->data->nb_ports; i++) {
- dev->max_port_id = RTE_MAX(dev->max_port_id, eth_port_id);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *old_dws;
- struct otx2_ssogws_dual *dws;
-
- old_dws = event_dev->data->ports[i];
- dws = rte_realloc_socket(ssogws_get_cookie(old_dws),
- sizeof(struct otx2_ssogws_dual)
- + RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (dws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- dws = (struct otx2_ssogws_dual *)
- ((uint8_t *)dws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&dws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = dws;
- } else {
- struct otx2_ssogws *old_ws;
- struct otx2_ssogws *ws;
-
- old_ws = event_dev->data->ports[i];
- ws = rte_realloc_socket(ssogws_get_cookie(old_ws),
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&ws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = ws;
- }
- }
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_eth_txq *txq;
- int i, ret;
-
- RTE_SET_USED(id);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev,
- eth_dev->data->port_id, i,
- txq);
- if (ret < 0)
- return ret;
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev, eth_dev->data->port_id,
- tx_queue_id, txq);
- if (ret < 0)
- return ret;
- }
-
- dev->tx_offloads |= otx2_eth_dev->tx_offload_flags;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_txq *txq;
- int i;
-
- RTE_SET_USED(id);
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(event_dev);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- txq->nb_sqb_bufs);
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs);
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
deleted file mode 100644
index d59d6c53f6..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_evdev.h"
-
-int
-otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(cdev);
-
- *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
-
- return 0;
-}
-
-static int
-otx2_ca_qp_sso_link(const struct rte_cryptodev *cdev, struct otx2_cpt_qp *qp,
- uint16_t sso_pf_func)
-{
- union otx2_cpt_af_lf_ctl2 af_lf_ctl2;
- int ret;
-
- ret = otx2_cpt_af_reg_read(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, &af_lf_ctl2.u);
- if (ret)
- return ret;
-
- af_lf_ctl2.s.sso_pf_func = sso_pf_func;
- ret = otx2_cpt_af_reg_write(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, af_lf_ctl2.u);
- return ret;
-}
-
-static void
-otx2_ca_qp_init(struct otx2_cpt_qp *qp, const struct rte_event *event)
-{
- if (event) {
- qp->qp_ev_bind = 1;
- rte_memcpy(&qp->ev, event, sizeof(struct rte_event));
- } else {
- qp->qp_ev_bind = 0;
- }
- qp->ca_enable = 1;
-}
-
-int
-otx2_ca_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id, const struct rte_event *event)
-{
- struct otx2_sso_evdev *sso_evdev = sso_pmd_priv(dev);
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- uint16_t sso_pf_func = otx2_sso_pf_func_get();
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret) {
- uint8_t qp_tmp;
- for (qp_tmp = 0; qp_tmp < qp_id; qp_tmp++)
- otx2_ca_qp_del(dev, cdev, qp_tmp);
- return ret;
- }
- otx2_ca_qp_init(qp, event);
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret)
- return ret;
- otx2_ca_qp_init(qp, event);
- }
-
- sso_evdev->rx_offloads |= NIX_RX_OFFLOAD_SECURITY_F;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)dev);
-
- /* Update crypto adapter xae count */
- if (queue_pair_id == -1)
- sso_evdev->adptr_xae_cnt +=
- vf->nb_queues * OTX2_CPT_DEFAULT_CMD_QLEN;
- else
- sso_evdev->adptr_xae_cnt += OTX2_CPT_DEFAULT_CMD_QLEN;
- sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)dev);
-
- return 0;
-}
-
-int
-otx2_ca_qp_del(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id)
-{
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- RTE_SET_USED(dev);
-
- ret = 0;
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
deleted file mode 100644
index b33cb7e139..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "cpt_pmd_logs.h"
-#include "cpt_ucode.h"
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_cryptodev_qp.h"
-
-static inline void
-otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
- struct rte_crypto_op *cop, uintptr_t *rsp,
- uint8_t cc)
-{
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- memset(cop->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session));
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
-}
-
-static inline uint64_t
-otx2_handle_crypto_event(uint64_t get_work1)
-{
- struct cpt_request_info *req;
- const struct otx2_cpt_qp *qp;
- struct rte_crypto_op *cop;
- uintptr_t *rsp;
- void *metabuf;
- uint8_t cc;
-
- req = (struct cpt_request_info *)(get_work1);
- cc = otx2_cpt_compcode_get(req);
- qp = req->qp;
-
- rsp = req->op;
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- otx2_ca_deq_post_process(qp, cop, rsp, cc);
-
- rte_mempool_put(qp->meta_info.pool, metabuf);
-
- return (uint64_t)(cop);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
deleted file mode 100644
index 1fc56f903b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ /dev/null
@@ -1,83 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2021 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_event_crypto_adapter.h>
-#include <rte_eventdev.h>
-
-#include <otx2_cryptodev_qp.h>
-#include <otx2_worker.h>
-
-static inline uint16_t
-otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
-{
- union rte_event_crypto_metadata *m_data;
- struct rte_crypto_op *crypto_op;
- struct rte_cryptodev *cdev;
- struct otx2_cpt_qp *qp;
- uint8_t cdev_id;
- uint16_t qp_id;
-
- crypto_op = ev->event_ptr;
- if (crypto_op == NULL)
- return 0;
-
- if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- crypto_op->sym->session);
- if (m_data == NULL)
- goto free_op;
-
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)crypto_op +
- crypto_op->private_data_offset);
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else {
- goto free_op;
- }
-
- cdev = &rte_cryptodevs[cdev_id];
- qp = cdev->data->queue_pairs[qp_id];
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(tag_op);
- if (qp->ca_enable)
- return cdev->enqueue_burst(qp, &crypto_op, 1);
-
-free_op:
- rte_pktmbuf_free(crypto_op->sym->m_src);
- rte_crypto_op_free(crypto_op);
- rte_errno = EINVAL;
- return 0;
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->tag_op, ev);
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
deleted file mode 100644
index 9b7ad27b04..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ /dev/null
@@ -1,272 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static void
-sso_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ggrp;
-
- ggrp = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + SSO_LF_GGRP_INT);
- if (intr == 0)
- return;
-
- otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSO_LF_GGRP_INT);
-}
-
-static int
-sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-ssow_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t gws = (base >> 12) & 0xFF;
- uint64_t intr;
-
- intr = otx2_read64(base + SSOW_LF_GWS_INT);
- if (intr == 0)
- return;
-
- otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSOW_LF_GWS_INT);
-}
-
-static int
-ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t ggrp_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
-}
-
-static void
-ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t gws_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
-}
-
-int
-sso_register_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc = -EINVAL;
- uint8_t nb_ports;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
- i, dev->sso_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < nb_ports; i++) {
- if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
- i, dev->ssow_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
- base);
- }
-
-fail:
- return rc;
-}
-
-void
-sso_unregister_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports;
- int i;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
- }
-}
-
-static void
-tim_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ring;
-
- ring = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + TIM_LF_NRSPERR_INT);
- otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr);
- intr = otx2_read64(base + TIM_LF_RAS_INT);
- otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + TIM_LF_NRSPERR_INT);
- otx2_write64(intr, base + TIM_LF_RAS_INT);
-}
-
-static int
-tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-}
-
-int
-tim_register_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- int rc = -EINVAL;
- uintptr_t base;
-
- if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x",
- ring_id, dev->tim_msixoff[ring_id]);
- goto fail;
- }
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-fail:
- return rc;
-}
-
-void
-tim_unregister_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- uintptr_t base;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c
deleted file mode 100644
index 48bfaf893d..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_selftest.c
+++ /dev/null
@@ -1,1517 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_debug.h>
-#include <rte_eal.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_hexdump.h>
-#include <rte_launch.h>
-#include <rte_lcore.h>
-#include <rte_mbuf.h>
-#include <rte_malloc.h>
-#include <rte_memcpy.h>
-#include <rte_per_lcore.h>
-#include <rte_random.h>
-#include <rte_test.h>
-
-#include "otx2_evdev.h"
-
-#define NUM_PACKETS (1024)
-#define MAX_EVENTS (1024)
-
-#define OCTEONTX2_TEST_RUN(setup, teardown, test) \
- octeontx_test_run(setup, teardown, test, #test)
-
-static int total;
-static int passed;
-static int failed;
-static int unsupported;
-
-static int evdev;
-static struct rte_mempool *eventdev_test_mempool;
-
-struct event_attr {
- uint32_t flow_id;
- uint8_t event_type;
- uint8_t sub_event_type;
- uint8_t sched_type;
- uint8_t queue;
- uint8_t port;
-};
-
-static uint32_t seqn_list_index;
-static int seqn_list[NUM_PACKETS];
-
-static inline void
-seqn_list_init(void)
-{
- RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
- memset(seqn_list, 0, sizeof(seqn_list));
- seqn_list_index = 0;
-}
-
-static inline int
-seqn_list_update(int val)
-{
- if (seqn_list_index >= NUM_PACKETS)
- return -1;
-
- seqn_list[seqn_list_index++] = val;
- rte_smp_wmb();
- return 0;
-}
-
-static inline int
-seqn_list_check(int limit)
-{
- int i;
-
- for (i = 0; i < limit; i++) {
- if (seqn_list[i] != i) {
- otx2_err("Seqn mismatch %d %d", seqn_list[i], i);
- return -1;
- }
- }
- return 0;
-}
-
-struct test_core_param {
- rte_atomic32_t *total_events;
- uint64_t dequeue_tmo_ticks;
- uint8_t port;
- uint8_t sched_type;
-};
-
-static int
-testsuite_setup(void)
-{
- const char *eventdev_name = "event_octeontx2";
-
- evdev = rte_event_dev_get_dev_id(eventdev_name);
- if (evdev < 0) {
- otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
- return -1;
- }
- return 0;
-}
-
-static void
-testsuite_teardown(void)
-{
- rte_event_dev_close(evdev);
-}
-
-static inline void
-devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
- struct rte_event_dev_info *info)
-{
- memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
- dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
- dev_conf->nb_event_ports = info->max_event_ports;
- dev_conf->nb_event_queues = info->max_event_queues;
- dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
- dev_conf->nb_event_port_dequeue_depth =
- info->max_event_port_dequeue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_events_limit =
- info->max_num_events;
-}
-
-enum {
- TEST_EVENTDEV_SETUP_DEFAULT,
- TEST_EVENTDEV_SETUP_PRIORITY,
- TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
-};
-
-static inline int
-_eventdev_setup(int mode)
-{
- const char *pool_name = "evdev_octeontx_test_pool";
- struct rte_event_dev_config dev_conf;
- struct rte_event_dev_info info;
- int i, ret;
-
- /* Create and destrory pool for each test case to make it standalone */
- eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
- 0, 0, 512,
- rte_socket_id());
- if (!eventdev_test_mempool) {
- otx2_err("ERROR creating mempool");
- return -1;
- }
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-
- devconf_set_default_sane_values(&dev_conf, &info);
- if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
- dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
-
- ret = rte_event_dev_configure(evdev, &dev_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
-
- uint32_t queue_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
- if (queue_count > 8)
- queue_count = 8;
-
- /* Configure event queues(0 to n) with
- * RTE_EVENT_DEV_PRIORITY_HIGHEST to
- * RTE_EVENT_DEV_PRIORITY_LOWEST
- */
- uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
- queue_count;
- for (i = 0; i < (int)queue_count; i++) {
- struct rte_event_queue_conf queue_conf;
-
- ret = rte_event_queue_default_conf_get(evdev, i,
- &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
- i);
- queue_conf.priority = i * step;
- ret = rte_event_queue_setup(evdev, i, &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
-
- } else {
- /* Configure event queues with default priority */
- for (i = 0; i < (int)queue_count; i++) {
- ret = rte_event_queue_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
- }
- /* Configure event ports */
- uint32_t port_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
- ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
- i);
- }
-
- ret = rte_event_dev_start(evdev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
-
- return 0;
-}
-
-static inline int
-eventdev_setup(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
-}
-
-static inline int
-eventdev_setup_priority(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
-}
-
-static inline int
-eventdev_setup_dequeue_timeout(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
-}
-
-static inline void
-eventdev_teardown(void)
-{
- rte_event_dev_stop(evdev);
- rte_mempool_free(eventdev_test_mempool);
-}
-
-static inline void
-update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
- uint32_t flow_id, uint8_t event_type,
- uint8_t sub_event_type, uint8_t sched_type,
- uint8_t queue, uint8_t port)
-{
- struct event_attr *attr;
-
- /* Store the event attributes in mbuf for future reference */
- attr = rte_pktmbuf_mtod(m, struct event_attr *);
- attr->flow_id = flow_id;
- attr->event_type = event_type;
- attr->sub_event_type = sub_event_type;
- attr->sched_type = sched_type;
- attr->queue = queue;
- attr->port = port;
-
- ev->flow_id = flow_id;
- ev->sub_event_type = sub_event_type;
- ev->event_type = event_type;
- /* Inject the new event */
- ev->op = RTE_EVENT_OP_NEW;
- ev->sched_type = sched_type;
- ev->queue_id = queue;
- ev->mbuf = m;
-}
-
-static inline int
-inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
- uint8_t sched_type, uint8_t queue, uint8_t port,
- unsigned int events)
-{
- struct rte_mbuf *m;
- unsigned int i;
-
- for (i = 0; i < events; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- update_event_and_validation_attr(m, &ev, flow_id, event_type,
- sub_event_type, sched_type,
- queue, port);
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- return 0;
-}
-
-static inline int
-check_excess_events(uint8_t port)
-{
- uint16_t valid_event;
- struct rte_event ev;
- int i;
-
- /* Check for excess events, try for a few times and exit */
- for (i = 0; i < 32; i++) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-
- RTE_TEST_ASSERT_SUCCESS(valid_event,
- "Unexpected valid event=%d",
- *rte_event_pmd_selftest_seqn(ev.mbuf));
- }
- return 0;
-}
-
-static inline int
-generate_random_events(const unsigned int total_events)
-{
- struct rte_event_dev_info info;
- uint32_t queue_count;
- unsigned int i;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
- for (i = 0; i < total_events; i++) {
- ret = inject_events(
- rte_rand() % info.max_event_queue_flows /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- rte_rand() % queue_count /* queue */,
- 0 /* port */,
- 1 /* events */);
- if (ret)
- return -1;
- }
- return ret;
-}
-
-
-static inline int
-validate_event(struct rte_event *ev)
-{
- struct event_attr *attr;
-
- attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
- RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
- "flow_id mismatch enq=%d deq =%d",
- attr->flow_id, ev->flow_id);
- RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
- "event_type mismatch enq=%d deq =%d",
- attr->event_type, ev->event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
- "sub_event_type mismatch enq=%d deq =%d",
- attr->sub_event_type, ev->sub_event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
- "sched_type mismatch enq=%d deq =%d",
- attr->sched_type, ev->sched_type);
- RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- attr->queue, ev->queue_id);
- return 0;
-}
-
-typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
- struct rte_event *ev);
-
-static inline int
-consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
-{
- uint32_t events = 0, forward_progress_cnt = 0, index = 0;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (1) {
- if (++forward_progress_cnt > UINT16_MAX) {
- otx2_err("Detected deadlock");
- return -1;
- }
-
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- forward_progress_cnt = 0;
- ret = validate_event(&ev);
- if (ret)
- return -1;
-
- if (fn != NULL) {
- ret = fn(index, port, &ev);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to validate test specific event");
- }
-
- ++index;
-
- rte_pktmbuf_free(ev.mbuf);
- if (++events >= total_events)
- break;
- }
-
- return check_excess_events(port);
-}
-
-static int
-validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
- "index=%d != seqn=%d",
- index, *rte_event_pmd_selftest_seqn(ev->mbuf));
- return 0;
-}
-
-static inline int
-test_simple_enqdeq(uint8_t sched_type)
-{
- int ret;
-
- ret = inject_events(0 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type */,
- sched_type,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
-}
-
-static int
-test_simple_enqdeq_ordered(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_simple_enqdeq_atomic(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_simple_enqdeq_parallel(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. On dequeue, using single event port(port 0) verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_single_port_deq(void)
-{
- int ret;
-
- ret = generate_random_events(MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, NULL);
-}
-
-/*
- * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
- * operation
- *
- * For example, Inject 32 events over 0..7 queues
- * enqueue events 0, 8, 16, 24 in queue 0
- * enqueue events 1, 9, 17, 25 in queue 1
- * ..
- * ..
- * enqueue events 7, 15, 23, 31 in queue 7
- *
- * On dequeue, Validate the events comes in 0,8,16,24,1,9,17,25..,7,15,23,31
- * order from queue0(highest priority) to queue7(lowest_priority)
- */
-static int
-validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- uint32_t queue_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- uint32_t range = MAX_EVENTS / queue_count;
- uint32_t expected_val = (index % range) * queue_count;
-
- expected_val += ev->queue_id;
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(
- *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
- "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
- *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
- range, queue_count, MAX_EVENTS);
- return 0;
-}
-
-static int
-test_multi_queue_priority(void)
-{
- int i, max_evts_roundoff;
- /* See validate_queue_priority() comments for priority validate logic */
- uint32_t queue_count;
- struct rte_mbuf *m;
- uint8_t queue;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- max_evts_roundoff = MAX_EVENTS / queue_count;
- max_evts_roundoff *= queue_count;
-
- for (i = 0; i < max_evts_roundoff; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- queue = i % queue_count;
- update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
- 0, RTE_SCHED_TYPE_PARALLEL,
- queue, 0);
- rte_event_enqueue_burst(evdev, 0, &ev, 1);
- }
-
- return consume_events(0, max_evts_roundoff, validate_queue_priority);
-}
-
-static int
-worker_multi_port_fn(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- ret = validate_event(&ev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- }
-
- return 0;
-}
-
-static inline int
-wait_workers_to_join(const rte_atomic32_t *count)
-{
- uint64_t cycles, print_cycles;
-
- cycles = rte_get_timer_cycles();
- print_cycles = cycles;
- while (rte_atomic32_read(count)) {
- uint64_t new_cycles = rte_get_timer_cycles();
-
- if (new_cycles - print_cycles > rte_get_timer_hz()) {
- otx2_err("Events %d", rte_atomic32_read(count));
- print_cycles = new_cycles;
- }
- if (new_cycles - cycles > rte_get_timer_hz() * 10000000000) {
- otx2_err("No schedules for seconds, deadlock (%d)",
- rte_atomic32_read(count));
- rte_event_dev_dump(evdev, stdout);
- cycles = new_cycles;
- return -1;
- }
- }
- rte_eal_mp_wait_lcore();
-
- return 0;
-}
-
-static inline int
-launch_workers_and_wait(int (*main_thread)(void *),
- int (*worker_thread)(void *), uint32_t total_events,
- uint8_t nb_workers, uint8_t sched_type)
-{
- rte_atomic32_t atomic_total_events;
- struct test_core_param *param;
- uint64_t dequeue_tmo_ticks;
- uint8_t port = 0;
- int w_lcore;
- int ret;
-
- if (!nb_workers)
- return 0;
-
- rte_atomic32_set(&atomic_total_events, total_events);
- seqn_list_init();
-
- param = malloc(sizeof(struct test_core_param) * nb_workers);
- if (!param)
- return -1;
-
- ret = rte_event_dequeue_timeout_ticks(evdev,
- rte_rand() % 10000000/* 10ms */,
- &dequeue_tmo_ticks);
- if (ret) {
- free(param);
- return -1;
- }
-
- param[0].total_events = &atomic_total_events;
- param[0].sched_type = sched_type;
- param[0].port = 0;
- param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_wmb();
-
- w_lcore = rte_get_next_lcore(
- /* start core */ -1,
- /* skip main */ 1,
- /* wrap */ 0);
- rte_eal_remote_launch(main_thread, ¶m[0], w_lcore);
-
- for (port = 1; port < nb_workers; port++) {
- param[port].total_events = &atomic_total_events;
- param[port].sched_type = sched_type;
- param[port].port = port;
- param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_smp_wmb();
- w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
- rte_eal_remote_launch(worker_thread, ¶m[port], w_lcore);
- }
-
- rte_smp_wmb();
- ret = wait_workers_to_join(&atomic_total_events);
- free(param);
-
- return ret;
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. Dequeue the events through multiple ports and verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_multi_port_deq(void)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- return launch_workers_and_wait(worker_multi_port_fn,
- worker_multi_port_fn, total_events,
- nr_ports, 0xff /* invalid */);
-}
-
-static
-void flush(uint8_t dev_id, struct rte_event event, void *arg)
-{
- unsigned int *count = arg;
-
- RTE_SET_USED(dev_id);
- if (event.event_type == RTE_EVENT_TYPE_CPU)
- *count = *count + 1;
-}
-
-static int
-test_dev_stop_flush(void)
-{
- unsigned int total_events = MAX_EVENTS, count = 0;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
- if (ret)
- return -2;
- rte_event_dev_stop(evdev);
- ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
- if (ret)
- return -3;
- RTE_TEST_ASSERT_EQUAL(total_events, count,
- "count mismatch total_events=%d count=%d",
- total_events, count);
-
- return 0;
-}
-
-static int
-validate_queue_to_port_single_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link queue x to port x and check correctness of link by checking
- * queue_id == x on dequeue on the specific port x
- */
-static int
-test_queue_to_port_single_link(void)
-{
- int i, nr_links, ret;
- uint32_t queue_count;
- uint32_t port_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
-
- /* Unlink all connections that created in eventdev_setup */
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_unlink(evdev, i, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0,
- "Failed to unlink all queues port=%d", i);
- }
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- nr_links = RTE_MIN(port_count, queue_count);
- const unsigned int total_events = MAX_EVENTS / nr_links;
-
- /* Link queue x to port x and inject events to queue x through port x */
- for (i = 0; i < nr_links; i++) {
- uint8_t queue = (uint8_t)i;
-
- ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, i /* port */,
- total_events /* events */);
- if (ret)
- return -1;
- }
-
- /* Verify the events generated from correct queue */
- for (i = 0; i < nr_links; i++) {
- ret = consume_events(i /* port */, total_events,
- validate_queue_to_port_single_link);
- if (ret)
- return -1;
- }
-
- return 0;
-}
-
-static int
-validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link all even number of queues to port 0 and all odd number of queues to
- * port 1 and verify the link connection on dequeue
- */
-static int
-test_queue_to_port_multi_link(void)
-{
- int ret, port0_events = 0, port1_events = 0;
- uint32_t nr_queues = 0;
- uint32_t nr_ports = 0;
- uint8_t queue, port;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
- "Queue count get failed");
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
- "Queue count get failed");
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- if (nr_ports < 2) {
- otx2_err("Not enough ports to test ports=%d", nr_ports);
- return 0;
- }
-
- /* Unlink all connections that created in eventdev_setup */
- for (port = 0; port < nr_ports; port++) {
- ret = rte_event_port_unlink(evdev, port, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
- port);
- }
-
- const unsigned int total_events = MAX_EVENTS / nr_queues;
-
- /* Link all even number of queues to port0 and odd numbers to port 1*/
- for (queue = 0; queue < nr_queues; queue++) {
- port = queue & 0x1;
- ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
- queue, port);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, port /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- if (port == 0)
- port0_events += total_events;
- else
- port1_events += total_events;
- }
-
- ret = consume_events(0 /* port */, port0_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
- ret = consume_events(1 /* port */, port1_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
-
- return 0;
-}
-
-static int
-worker_flow_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0 */
- if (ev.sub_event_type == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type = 1; /* stage 1 */
- ev.sched_type = new_sched_type;
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.sub_event_type = %d",
- ev.sub_event_type);
- return -1;
- }
- }
- return 0;
-}
-
-static int
-test_multiport_flow_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- rte_mb();
- ret = launch_workers_and_wait(worker_flow_based_pipeline,
- worker_flow_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check the events order maintained or not */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-/* Multi port ordered to atomic transaction */
-static int
-test_multi_port_flow_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_ordered_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_ordered_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_atomic_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_atomic_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_parallel_to_atomic(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_parallel_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_parallel_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_group_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0(group 0) */
- if (ev.queue_id == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = new_sched_type;
- ev.queue_id = 1; /* Stage 1*/
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_multiport_queue_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t queue_count;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count < 2 || !nr_ports) {
- otx2_err("Not enough queues=%d ports=%d or workers=%d",
- queue_count, nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- ret = launch_workers_and_wait(worker_group_based_pipeline,
- worker_group_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check the events order maintained or not */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-static int
-test_multi_port_queue_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_ordered_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_ordered_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_atomic_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_atomic_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_parallel_to_atomic(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_parallel_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_parallel_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.sub_event_type == 255) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-static int
-launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
-{
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d",
- nr_ports, rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- rte_rand() %
- (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS /* events */);
- if (ret)
- return -1;
-
- return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
- 0xff /* invalid */);
-}
-
-/* Flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_flow_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_flow_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_queue_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_queue_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* Last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sub_event_type = rte_rand() % 256;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue and flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_mixed_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_mixed_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_ordered_flow_producer(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- struct rte_mbuf *m;
- int counter = 0;
-
- while (counter < NUM_PACKETS) {
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- if (m == NULL)
- continue;
-
- *rte_event_pmd_selftest_seqn(m) = counter++;
-
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- ev.flow_id = 0x1; /* Generate a fat flow */
- ev.sub_event_type = 0;
- /* Inject the new event */
- ev.op = RTE_EVENT_OP_NEW;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = RTE_SCHED_TYPE_ORDERED;
- ev.queue_id = 0;
- ev.mbuf = m;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
-
- return 0;
-}
-
-static inline int
-test_producer_consumer_ingress_order_test(int (*fn)(void *))
-{
- uint32_t nr_ports;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (rte_lcore_count() < 3 || nr_ports < 2) {
- otx2_err("### Not enough cores for test.");
- return 0;
- }
-
- launch_workers_and_wait(worker_ordered_flow_producer, fn,
- NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
- /* Check the events order maintained or not */
- return seqn_list_check(NUM_PACKETS);
-}
-
-/* Flow based producer consumer ingress order test */
-static int
-test_flow_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_flow_based_pipeline);
-}
-
-/* Queue based producer consumer ingress order test */
-static int
-test_queue_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_group_based_pipeline);
-}
-
-static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
- int (*test)(void), const char *name)
-{
- if (setup() < 0) {
- printf("Error setting up test %s", name);
- unsupported++;
- } else {
- if (test() < 0) {
- failed++;
- printf("+ TestCase [%2d] : %s failed\n", total, name);
- } else {
- passed++;
- printf("+ TestCase [%2d] : %s succeeded\n", total,
- name);
- }
- }
-
- total++;
- tdown();
-}
-
-int
-otx2_sso_selftest(void)
-{
- testsuite_setup();
-
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_single_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_dev_stop_flush);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_multi_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_single_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_multi_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_mixed_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_flow_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
- test_multi_queue_priority);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- printf("Total tests : %d\n", total);
- printf("Passed : %d\n", passed);
- printf("Failed : %d\n", failed);
- printf("Not supported : %d\n", unsupported);
-
- testsuite_teardown();
-
- if (failed)
- return -1;
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
deleted file mode 100644
index 74fcec8a07..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ /dev/null
@@ -1,286 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_STATS_H__
-#define __OTX2_EVDEV_STATS_H__
-
-#include "otx2_evdev.h"
-
-struct otx2_sso_xstats_name {
- const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
- const size_t offset;
- const uint64_t mask;
- const uint8_t shift;
- uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
-};
-
-static struct otx2_sso_xstats_name sso_hws_xstats[] = {
- {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
- 0x3FF, 0, {0} },
- {"affinity_arbitration_credits",
- offsetof(struct sso_hws_stats, arbitration),
- 0xF, 16, {0} },
-};
-
-static struct otx2_sso_xstats_name sso_grp_xstats[] = {
- {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
- {0} },
- {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
- 0, {0} },
- {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
- {0} },
- {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
- {0} },
- {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
- {0} },
- {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
- {0} },
- {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
- 0, {0} },
- {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
- 16, {0} },
- {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
- 0xFFFFFFFF, 0, {0} },
-};
-
-#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
-#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
-
-#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
-
-static int
-otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
- const unsigned int ids[], uint64_t values[], unsigned int n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- values[i] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- values[i] = (values[i] >> xstat->shift) &
- xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- };
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- values[i] += value;
- else
- values[i] = value;
-
- values[i] -= xstat->reset_snap[queue_port_id];
- }
-
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- int16_t queue_port_id, const uint32_t ids[], uint32_t n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- xstat->reset_snap[queue_port_id] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- xstat->reset_snap[queue_port_id] =
- (xstat->reset_snap[queue_port_id] >>
- xstat->shift) & xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void *)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- };
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- xstat->reset_snap[queue_port_id] += value;
- else
- xstat->reset_snap[queue_port_id] = value;
- }
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- uint8_t queue_port_id,
- struct rte_event_dev_xstats_name *xstats_names,
- unsigned int *ids, unsigned int size)
-{
- struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int xidx = 0;
- unsigned int i;
-
- for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_hws_xstats[i].name);
- }
-
- for (; i < OTX2_SSO_NUM_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
- }
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- break;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- break;
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- break;
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- default:
- otx2_err("Invalid mode received");
- return -EINVAL;
- };
-
- if (xstats_mode_count > size || !ids || !xstats_names)
- return xstats_mode_count;
-
- for (i = 0; i < xstats_mode_count; i++) {
- xidx = i + start_offset;
- strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
- sizeof(xstats_names[i].name));
- ids[i] = xidx;
- }
-
- return i;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
deleted file mode 100644
index 6da8b14b78..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ /dev/null
@@ -1,735 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static struct event_timer_adapter_ops otx2_tim_ops;
-
-static inline int
-tim_get_msix_offsets(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get TIM MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < dev->nb_rings; i++)
- dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i];
-
- return rc;
-}
-
-static void
-tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
-{
- uint8_t prod_flag = !tim_ring->prod_type_sp;
-
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
-#define FP(_name, _f3, _f2, _f1, flags) \
- [_f3][_f2][_f1] = otx2_tim_arm_burst_##_name,
- TIM_ARM_FASTPATH_MODES
-#undef FP
- };
-
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) \
- [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_##_name,
- TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
- };
-
- otx2_tim_ops.arm_burst =
- arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
- otx2_tim_ops.arm_tmo_tick_burst =
- arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
- otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
-}
-
-static void
-otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer_adapter_info *adptr_info)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
-
- adptr_info->max_tmo_ns = tim_ring->max_tout;
- adptr_info->min_resolution_ns = tim_ring->ena_periodic ?
- tim_ring->max_tout : tim_ring->tck_nsec;
- rte_memcpy(&adptr_info->conf, &adptr->data->conf,
- sizeof(struct rte_event_timer_adapter_conf));
-}
-
-static int
-tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
- struct rte_event_timer_adapter_conf *rcfg)
-{
- unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
- unsigned int mp_flags = 0;
- char pool_name[25];
- int rc;
-
- cache_sz /= rte_lcore_count();
- /* Create chunk pool. */
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
- mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
- otx2_tim_dbg("Using single producer mode");
- tim_ring->prod_type_sp = true;
- }
-
- snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d",
- tim_ring->ring_id);
-
- if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
- cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
-
- cache_sz = cache_sz != 0 ? cache_sz : 2;
- tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- if (!tim_ring->disable_npa) {
- tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(),
- NULL);
- if (rc < 0) {
- otx2_err("Unable to set chunkpool ops");
- goto free;
- }
-
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate chunkpool.");
- goto free;
- }
- tim_ring->aura = npa_lf_aura_handle_to_aura(
- tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = tim_ring->ena_periodic ? 1 : 0;
- } else {
- tim_ring->chunk_pool = rte_mempool_create(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, NULL, NULL, NULL, NULL,
- rte_socket_id(),
- mp_flags);
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
- tim_ring->ena_dfb = 1;
- }
-
- return 0;
-
-free:
- rte_mempool_free(tim_ring->chunk_pool);
- return rc;
-}
-
-static void
-tim_err_desc(int rc)
-{
- switch (rc) {
- case TIM_AF_NO_RINGS_LEFT:
- otx2_err("Unable to allocat new TIM ring.");
- break;
- case TIM_AF_INVALID_NPA_PF_FUNC:
- otx2_err("Invalid NPA pf func.");
- break;
- case TIM_AF_INVALID_SSO_PF_FUNC:
- otx2_err("Invalid SSO pf func.");
- break;
- case TIM_AF_RING_STILL_RUNNING:
- otx2_tim_dbg("Ring busy.");
- break;
- case TIM_AF_LF_INVALID:
- otx2_err("Invalid Ring id.");
- break;
- case TIM_AF_CSIZE_NOT_ALIGNED:
- otx2_err("Chunk size specified needs to be multiple of 16.");
- break;
- case TIM_AF_CSIZE_TOO_SMALL:
- otx2_err("Chunk size too small.");
- break;
- case TIM_AF_CSIZE_TOO_BIG:
- otx2_err("Chunk size too big.");
- break;
- case TIM_AF_INTERVAL_TOO_SMALL:
- otx2_err("Bucket traversal interval too small.");
- break;
- case TIM_AF_INVALID_BIG_ENDIAN_VALUE:
- otx2_err("Invalid Big endian value.");
- break;
- case TIM_AF_INVALID_CLOCK_SOURCE:
- otx2_err("Invalid Clock source specified.");
- break;
- case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED:
- otx2_err("GPIO clock source not enabled.");
- break;
- case TIM_AF_INVALID_BSIZE:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_PERIODIC:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_DONTFREE:
- otx2_err("Invalid Don't free value.");
- break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
- otx2_err("Don't free bit not set when periodic is enabled.");
- break;
- case TIM_AF_RING_ALREADY_DISABLED:
- otx2_err("Ring already stopped");
- break;
- default:
- otx2_err("Unknown Error.");
- }
-}
-
-static int
-otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
-{
- struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_tim_ring *tim_ring;
- struct tim_config_req *cfg_req;
- struct tim_ring_req *free_req;
- struct tim_lf_alloc_req *req;
- struct tim_lf_alloc_rsp *rsp;
- uint8_t is_periodic;
- int i, rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- if (adptr->data->id >= dev->nb_rings)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->sso_pf_func = otx2_sso_pf_func_get();
- req->ring = adptr->data->id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- return -ENODEV;
- }
-
- if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
- rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
- rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
- rsp->tenns_clk);
- else {
- rc = -ERANGE;
- goto rng_mem_err;
- }
- }
-
- is_periodic = 0;
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) {
- if (rcfg->max_tmo_ns &&
- rcfg->max_tmo_ns != rcfg->timer_tick_ns) {
- rc = -ERANGE;
- goto rng_mem_err;
- }
-
- /* Use 2 buckets to avoid contention */
- rcfg->max_tmo_ns = rcfg->timer_tick_ns;
- rcfg->timer_tick_ns /= 2;
- is_periodic = 1;
- }
-
- tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
- if (tim_ring == NULL) {
- rc = -ENOMEM;
- goto rng_mem_err;
- }
-
- adptr->data->adapter_priv = tim_ring;
-
- tim_ring->tenns_clk_freq = rsp->tenns_clk;
- tim_ring->clk_src = (int)rcfg->clk_src;
- tim_ring->ring_id = adptr->data->id;
- tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
- tim_ring->max_tout = is_periodic ?
- rcfg->timer_tick_ns * 2 : rcfg->max_tmo_ns;
- tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
- tim_ring->chunk_sz = dev->chunk_sz;
- tim_ring->nb_timers = rcfg->nb_timers;
- tim_ring->disable_npa = dev->disable_npa;
- tim_ring->ena_periodic = is_periodic;
- tim_ring->enable_stats = dev->enable_stats;
-
- for (i = 0; i < dev->ring_ctl_cnt ; i++) {
- struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
-
- if (ring_ctl->ring == tim_ring->ring_id) {
- tim_ring->chunk_sz = ring_ctl->chunk_slots ?
- ((uint32_t)(ring_ctl->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz;
- tim_ring->enable_stats = ring_ctl->enable_stats;
- tim_ring->disable_npa = ring_ctl->disable_npa;
- }
- }
-
- if (tim_ring->disable_npa) {
- tim_ring->nb_chunks =
- tim_ring->nb_timers /
- OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
- } else {
- tim_ring->nb_chunks = tim_ring->nb_timers;
- }
- tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
- sizeof(struct otx2_tim_bkt),
- RTE_CACHE_LINE_SIZE);
- if (tim_ring->bkt == NULL)
- goto bkt_mem_err;
-
- rc = tim_chnk_pool_create(tim_ring, rcfg);
- if (rc < 0)
- goto chnk_mem_err;
-
- cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox);
-
- cfg_req->ring = tim_ring->ring_id;
- cfg_req->bigendian = false;
- cfg_req->clocksource = tim_ring->clk_src;
- cfg_req->enableperiodic = tim_ring->ena_periodic;
- cfg_req->enabledontfreebuffer = tim_ring->ena_dfb;
- cfg_req->bucketsize = tim_ring->nb_bkts;
- cfg_req->chunksize = tim_ring->chunk_sz;
- cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec,
- tim_ring->tenns_clk_freq);
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- goto chnk_mem_err;
- }
-
- tim_ring->base = dev->bar2 +
- (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
-
- rc = tim_register_irq(tim_ring->ring_id);
- if (rc < 0)
- goto chnk_mem_err;
-
- otx2_write64((uint64_t)tim_ring->bkt,
- tim_ring->base + TIM_LF_RING_BASE);
- otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
-
- /* Set fastpath ops. */
- tim_set_fp_ops(tim_ring);
-
- /* Update SSO xae count. */
- sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)tim_ring,
- RTE_EVENT_TYPE_TIMER);
- sso_xae_reconfigure(dev->event_dev);
-
- otx2_tim_dbg("Total memory used %"PRIu64"MB\n",
- (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz)
- + (tim_ring->nb_bkts * sizeof(struct otx2_tim_bkt))) /
- BIT_ULL(20)));
-
- return rc;
-
-chnk_mem_err:
- rte_free(tim_ring->bkt);
-bkt_mem_err:
- rte_free(tim_ring);
-rng_mem_err:
- free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- free_req->ring = adptr->data->id;
- otx2_mbox_process(dev->mbox);
- return rc;
-}
-
-static void
-otx2_tim_calibrate_start_tsc(struct otx2_tim_ring *tim_ring)
-{
-#define OTX2_TIM_CALIB_ITER 1E6
- uint32_t real_bkt, bucket;
- int icount, ecount = 0;
- uint64_t bkt_cyc;
-
- for (icount = 0; icount < OTX2_TIM_CALIB_ITER; icount++) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- bkt_cyc = tim_cntvct();
- bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
- tim_ring->tck_int;
- bucket = bucket % (tim_ring->nb_bkts);
- tim_ring->ring_start_cyc = bkt_cyc - (real_bkt *
- tim_ring->tck_int);
- if (bucket != real_bkt)
- ecount++;
- }
- tim_ring->last_updt_cyc = bkt_cyc;
- otx2_tim_dbg("Bucket mispredict %3.2f distance %d\n",
- 100 - (((double)(icount - ecount) / (double)icount) * 100),
- bucket - real_bkt);
-}
-
-static int
-otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_enable_rsp *rsp;
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- goto fail;
- }
- tim_ring->ring_start_cyc = rsp->timestarted;
- tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, tim_cntfrq());
- tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
- tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
- tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
-
- otx2_tim_calibrate_start_tsc(tim_ring);
-
-fail:
- return rc;
-}
-
-static int
-otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- rc = -EBUSY;
- }
-
- return rc;
-}
-
-static int
-otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- tim_unregister_irq(tim_ring->ring_id);
-
- req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- return -EBUSY;
- }
-
- rte_free(tim_ring->bkt);
- rte_mempool_free(tim_ring->chunk_pool);
- rte_free(adptr->data->adapter_priv);
-
- return 0;
-}
-
-static int
-otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter,
- struct rte_event_timer_adapter_stats *stats)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
- uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
-
- stats->evtim_exp_count = __atomic_load_n(&tim_ring->arm_cnt,
- __ATOMIC_RELAXED);
- stats->ev_enq_count = stats->evtim_exp_count;
- stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc,
- &tim_ring->fast_div);
- return 0;
-}
-
-static int
-otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
-
- __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
- return 0;
-}
-
-int
-otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
- uint32_t *caps, const struct event_timer_adapter_ops **ops)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
-
- RTE_SET_USED(flags);
-
- if (dev == NULL)
- return -ENODEV;
-
- otx2_tim_ops.init = otx2_tim_ring_create;
- otx2_tim_ops.uninit = otx2_tim_ring_free;
- otx2_tim_ops.start = otx2_tim_ring_start;
- otx2_tim_ops.stop = otx2_tim_ring_stop;
- otx2_tim_ops.get_info = otx2_tim_ring_info_get;
-
- if (dev->enable_stats) {
- otx2_tim_ops.stats_get = otx2_tim_stats_get;
- otx2_tim_ops.stats_reset = otx2_tim_stats_reset;
- }
-
- /* Store evdev pointer for later use. */
- dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
- *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC;
- *ops = &otx2_tim_ops;
-
- return 0;
-}
-
-#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
-#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
-#define OTX2_TIM_STATS_ENA "tim_stats_ena"
-#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
-#define OTX2_TIM_RING_CTL "tim_ring_ctl"
-
-static void
-tim_parse_ring_param(char *value, void *opaque)
-{
- struct otx2_tim_evdev *dev = opaque;
- struct otx2_tim_ctl ring_ctl = {0};
- char *tok = strtok(value, "-");
- struct otx2_tim_ctl *old_ptr;
- uint16_t *val;
-
- val = (uint16_t *)&ring_ctl;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&ring_ctl.enable_stats + 1)) {
- otx2_err(
- "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
- return;
- }
-
- dev->ring_ctl_cnt++;
- old_ptr = dev->ring_ctl_data;
- dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data,
- sizeof(struct otx2_tim_ctl) *
- dev->ring_ctl_cnt, 0);
- if (dev->ring_ctl_data == NULL) {
- dev->ring_ctl_data = old_ptr;
- dev->ring_ctl_cnt--;
- return;
- }
-
- dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
-}
-
-static void
-tim_parse_ring_ctl_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- tim_parse_ring_param(start + 1, opaque);
- start = end;
- s = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
- * isn't allowed. 0 represents default.
- */
- tim_parse_ring_ctl_list(value, opaque);
-
- return 0;
-}
-
-static void
-tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
-{
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- return;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
- &parse_kvargs_flag, &dev->disable_npa);
- rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
- &parse_kvargs_value, &dev->chunk_slots);
- rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
- &dev->enable_stats);
- rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
- &dev->min_ring_cnt);
- rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL,
- &tim_parse_kvargs_dict, &dev);
-
- rte_kvargs_free(kvlist);
-}
-
-void
-otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
-{
- struct rsrc_attach_req *atch_req;
- struct rsrc_detach_req *dtch_req;
- struct free_rsrcs_rsp *rsrc_cnt;
- const struct rte_memzone *mz;
- struct otx2_tim_evdev *dev;
- int rc;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
- sizeof(struct otx2_tim_evdev),
- rte_socket_id(), 0);
- if (mz == NULL) {
- otx2_tim_dbg("Unable to allocate memory for TIM Event device");
- return;
- }
-
- dev = mz->addr;
- dev->pci_dev = pci_dev;
- dev->mbox = cmn_dev->mbox;
- dev->bar2 = cmn_dev->bar2;
-
- tim_parse_devargs(pci_dev->device.devargs, dev);
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count.");
- goto mz_free;
- }
-
- dev->nb_rings = dev->min_ring_cnt ?
- RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim;
-
- if (!dev->nb_rings) {
- otx2_tim_dbg("No TIM Logical functions provisioned.");
- goto mz_free;
- }
-
- atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
- atch_req->modify = true;
- atch_req->timlfs = dev->nb_rings;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- otx2_err("Unable to attach TIM rings.");
- goto mz_free;
- }
-
- rc = tim_get_msix_offsets();
- if (rc < 0) {
- otx2_err("Unable to get MSIX offsets for TIM.");
- goto detach;
- }
-
- if (dev->chunk_slots &&
- dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
- dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
- dev->chunk_sz = (dev->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT;
- } else {
- dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
- }
-
- return;
-
-detach:
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
-mz_free:
- rte_memzone_free(mz);
-}
-
-void
-otx2_tim_fini(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct rsrc_detach_req *dtch_req;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
- rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
-}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
deleted file mode 100644
index dac642e0e1..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ /dev/null
@@ -1,256 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_EVDEV_H__
-#define __OTX2_TIM_EVDEV_H__
-
-#include <event_timer_adapter_pmd.h>
-#include <rte_event_timer_adapter.h>
-#include <rte_reciprocal.h>
-
-#include "otx2_dev.h"
-
-#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
-
-#define otx2_tim_func_trace otx2_tim_dbg
-
-#define TIM_LF_RING_AURA (0x0)
-#define TIM_LF_RING_BASE (0x130)
-#define TIM_LF_NRSPERR_INT (0x200)
-#define TIM_LF_NRSPERR_INT_W1S (0x208)
-#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
-#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
-#define TIM_LF_RAS_INT (0x300)
-#define TIM_LF_RAS_INT_W1S (0x308)
-#define TIM_LF_RAS_INT_ENA_W1S (0x310)
-#define TIM_LF_RAS_INT_ENA_W1C (0x318)
-#define TIM_LF_RING_REL (0x400)
-
-#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
-#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \
- TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
-#define TIM_BUCKET_W1_S_LOCK (40)
-#define TIM_BUCKET_W1_M_LOCK ((1ULL << \
- (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \
- TIM_BUCKET_W1_S_LOCK)) - 1)
-#define TIM_BUCKET_W1_S_RSVD (35)
-#define TIM_BUCKET_W1_S_BSK (34)
-#define TIM_BUCKET_W1_M_BSK ((1ULL << \
- (TIM_BUCKET_W1_S_RSVD - \
- TIM_BUCKET_W1_S_BSK)) - 1)
-#define TIM_BUCKET_W1_S_HBT (33)
-#define TIM_BUCKET_W1_M_HBT ((1ULL << \
- (TIM_BUCKET_W1_S_BSK - \
- TIM_BUCKET_W1_S_HBT)) - 1)
-#define TIM_BUCKET_W1_S_SBT (32)
-#define TIM_BUCKET_W1_M_SBT ((1ULL << \
- (TIM_BUCKET_W1_S_HBT - \
- TIM_BUCKET_W1_S_SBT)) - 1)
-#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
-#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \
- (TIM_BUCKET_W1_S_SBT - \
- TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
-
-#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
-
-#define TIM_BUCKET_CHUNK_REMAIN \
- (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
-
-#define TIM_BUCKET_LOCK \
- (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
-
-#define TIM_BUCKET_SEMA_WLOCK \
- (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
-
-#define OTX2_MAX_TIM_RINGS (256)
-#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
-#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
-#define OTX2_TIM_CHUNK_ALIGNMENT (16)
-#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \
- OTX2_TIM_CHUNK_ALIGNMENT)
-#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
-#define OTX2_TIM_MIN_CHUNK_SLOTS (0x8)
-#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
-#define OTX2_TIM_MIN_TMO_TKS (256)
-
-#define OTX2_TIM_SP 0x1
-#define OTX2_TIM_MP 0x2
-#define OTX2_TIM_ENA_FB 0x10
-#define OTX2_TIM_ENA_DFB 0x20
-#define OTX2_TIM_ENA_STATS 0x40
-
-enum otx2_tim_clk_src {
- OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
- OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
- OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
- OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
-};
-
-struct otx2_tim_bkt {
- uint64_t first_chunk;
- union {
- uint64_t w1;
- struct {
- uint32_t nb_entry;
- uint8_t sbt:1;
- uint8_t hbt:1;
- uint8_t bsk:1;
- uint8_t rsvd:5;
- uint8_t lock;
- int16_t chunk_remainder;
- };
- };
- uint64_t current_chunk;
- uint64_t pad;
-} __rte_packed __rte_aligned(32);
-
-struct otx2_tim_ent {
- uint64_t w0;
- uint64_t wqe;
-} __rte_packed;
-
-struct otx2_tim_ctl {
- uint16_t ring;
- uint16_t chunk_slots;
- uint16_t disable_npa;
- uint16_t enable_stats;
-};
-
-struct otx2_tim_evdev {
- struct rte_pci_device *pci_dev;
- struct rte_eventdev *event_dev;
- struct otx2_mbox *mbox;
- uint16_t nb_rings;
- uint32_t chunk_sz;
- uintptr_t bar2;
- /* Dev args */
- uint8_t disable_npa;
- uint16_t chunk_slots;
- uint16_t min_ring_cnt;
- uint8_t enable_stats;
- uint16_t ring_ctl_cnt;
- struct otx2_tim_ctl *ring_ctl_data;
- /* HW const */
- /* MSIX offsets */
- uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
-};
-
-struct otx2_tim_ring {
- uintptr_t base;
- uint16_t nb_chunk_slots;
- uint32_t nb_bkts;
- uint64_t last_updt_cyc;
- uint64_t ring_start_cyc;
- uint64_t tck_int;
- uint64_t tot_int;
- struct otx2_tim_bkt *bkt;
- struct rte_mempool *chunk_pool;
- struct rte_reciprocal_u64 fast_div;
- struct rte_reciprocal_u64 fast_bkt;
- uint64_t arm_cnt;
- uint8_t prod_type_sp;
- uint8_t enable_stats;
- uint8_t disable_npa;
- uint8_t ena_dfb;
- uint8_t ena_periodic;
- uint16_t ring_id;
- uint32_t aura;
- uint64_t nb_timers;
- uint64_t tck_nsec;
- uint64_t max_tout;
- uint64_t nb_chunks;
- uint64_t chunk_sz;
- uint64_t tenns_clk_freq;
- enum otx2_tim_clk_src clk_src;
-} __rte_cache_aligned;
-
-static inline struct otx2_tim_evdev *
-tim_priv_get(void)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
- if (mz == NULL)
- return NULL;
-
- return mz->addr;
-}
-
-#ifdef RTE_ARCH_ARM64
-static inline uint64_t
-tim_cntvct(void)
-{
- return __rte_arm64_cntvct();
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return __rte_arm64_cntfrq();
-}
-#else
-static inline uint64_t
-tim_cntvct(void)
-{
- return 0;
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return 0;
-}
-#endif
-
-#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, 0, OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(mp, 0, 0, 1, OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(fb_sp, 0, 1, 0, OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(fb_mp, 0, 1, 1, OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
- FP(stats_mod_sp, 1, 0, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(stats_mod_mp, 1, 0, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(stats_mod_fb_sp, 1, 1, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(stats_mod_fb_mp, 1, 1, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_MP)
-
-#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, 0, OTX2_TIM_ENA_DFB) \
- FP(fb, 0, 1, OTX2_TIM_ENA_FB) \
- FP(stats_dfb, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB) \
- FP(stats_fb, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB)
-
-#define FP(_name, _f3, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint16_t nb_timers);
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_tmo_tick_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint64_t timeout_tick, \
- const uint16_t nb_timers);
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t otx2_tim_timer_cancel_burst(
- const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim, const uint16_t nb_timers);
-
-int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
- uint32_t *caps,
- const struct event_timer_adapter_ops **ops);
-
-void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
-void otx2_tim_fini(void);
-
-/* TIM IRQ */
-int tim_register_irq(uint16_t ring_id);
-void tim_unregister_irq(uint16_t ring_id);
-
-#endif /* __OTX2_TIM_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
deleted file mode 100644
index 9ee07958fd..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ /dev/null
@@ -1,192 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_tim_evdev.h"
-#include "otx2_tim_worker.h"
-
-static inline int
-tim_arm_checks(const struct otx2_tim_ring * const tim_ring,
- struct rte_event_timer * const tim)
-{
- if (unlikely(tim->state)) {
- tim->state = RTE_EVENT_TIMER_ERROR;
- rte_errno = EALREADY;
- goto fail;
- }
-
- if (unlikely(!tim->timeout_ticks ||
- tim->timeout_ticks >= tim_ring->nb_bkts)) {
- tim->state = tim->timeout_ticks ? RTE_EVENT_TIMER_ERROR_TOOLATE
- : RTE_EVENT_TIMER_ERROR_TOOEARLY;
- rte_errno = EINVAL;
- goto fail;
- }
-
- return 0;
-
-fail:
- return -EINVAL;
-}
-
-static inline void
-tim_format_event(const struct rte_event_timer * const tim,
- struct otx2_tim_ent * const entry)
-{
- entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
- (tim->ev.event & 0xFFFFFFFFF);
- entry->wqe = tim->ev.u64;
-}
-
-static inline void
-tim_sync_start_cyc(struct otx2_tim_ring *tim_ring)
-{
- uint64_t cur_cyc = tim_cntvct();
- uint32_t real_bkt;
-
- if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- cur_cyc = tim_cntvct();
-
- tim_ring->ring_start_cyc = cur_cyc -
- (real_bkt * tim_ring->tck_int);
- tim_ring->last_updt_cyc = cur_cyc;
- }
-
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers,
- const uint8_t flags)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_ent entry;
- uint16_t index;
- int ret;
-
- tim_sync_start_cyc(tim_ring);
- for (index = 0; index < nb_timers; index++) {
- if (tim_arm_checks(tim_ring, tim[index]))
- break;
-
- tim_format_event(tim[index], &entry);
- if (flags & OTX2_TIM_SP)
- ret = tim_add_entry_sp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
- if (flags & OTX2_TIM_MP)
- ret = tim_add_entry_mp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
-
- if (unlikely(ret)) {
- rte_errno = -ret;
- break;
- }
- }
-
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
-
- return index;
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint64_t timeout_tick,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned;
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- uint16_t set_timers = 0;
- uint16_t arr_idx = 0;
- uint16_t idx;
- int ret;
-
- if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) {
- const enum rte_event_timer_state state = timeout_tick ?
- RTE_EVENT_TIMER_ERROR_TOOLATE :
- RTE_EVENT_TIMER_ERROR_TOOEARLY;
- for (idx = 0; idx < nb_timers; idx++)
- tim[idx]->state = state;
-
- rte_errno = EINVAL;
- return 0;
- }
-
- tim_sync_start_cyc(tim_ring);
- while (arr_idx < nb_timers) {
- for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers);
- idx++, arr_idx++) {
- tim_format_event(tim[arr_idx], &entry[idx]);
- }
- ret = tim_add_entry_brst(tim_ring, timeout_tick,
- &tim[set_timers], entry, idx, flags);
- set_timers += ret;
- if (ret != idx)
- break;
- }
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
- __ATOMIC_RELAXED);
-
- return set_timers;
-}
-
-#define FP(_name, _f3, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \
-}
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_tmo_tick_burst_ ## _name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint64_t timeout_tick, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
- nb_timers, _flags); \
-}
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t
-otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers)
-{
- uint16_t index;
- int ret;
-
- RTE_SET_USED(adptr);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- for (index = 0; index < nb_timers; index++) {
- if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
- rte_errno = EALREADY;
- break;
- }
-
- if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
- rte_errno = EINVAL;
- break;
- }
- ret = tim_rm_entry(tim[index]);
- if (ret) {
- rte_errno = -ret;
- break;
- }
- }
-
- return index;
-}
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
deleted file mode 100644
index efe88a8692..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ /dev/null
@@ -1,598 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_WORKER_H__
-#define __OTX2_TIM_WORKER_H__
-
-#include "otx2_tim_evdev.h"
-
-static inline uint8_t
-tim_bkt_fetch_lock(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_LOCK) &
- TIM_BUCKET_W1_M_LOCK;
-}
-
-static inline int16_t
-tim_bkt_fetch_rem(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
- TIM_BUCKET_W1_M_CHUNK_REMAINDER;
-}
-
-static inline int16_t
-tim_bkt_get_rem(struct otx2_tim_bkt *bktp)
-{
- return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline uint8_t
-tim_bkt_get_hbt(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
-}
-
-static inline uint8_t
-tim_bkt_get_bsk(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
-}
-
-static inline uint64_t
-tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp)
-{
- /* Clear everything except lock. */
- const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
- __ATOMIC_ACQUIRE);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_inc_lock(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_dec_lock(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
-}
-
-static inline void
-tim_bkt_dec_lock_relaxed(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
-}
-
-static inline uint32_t
-tim_bkt_get_nent(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
- TIM_BUCKET_W1_M_NUM_ENTRIES;
-}
-
-static inline void
-tim_bkt_inc_nent(struct otx2_tim_bkt *bktp)
-{
- __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v)
-{
- __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
- TIM_BUCKET_W1_S_NUM_ENTRIES);
-
- return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
-{
- return (n - (d * rte_reciprocal_divide_u64(n, &R)));
-}
-
-static __rte_always_inline void
-tim_get_target_bucket(struct otx2_tim_ring *const tim_ring,
- const uint32_t rel_bkt, struct otx2_tim_bkt **bkt,
- struct otx2_tim_bkt **mirr_bkt)
-{
- const uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
- uint64_t bucket =
- rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
- rel_bkt;
- uint64_t mirr_bucket = 0;
-
- bucket =
- tim_bkt_fast_mod(bucket, tim_ring->nb_bkts, tim_ring->fast_bkt);
- mirr_bucket = tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
- tim_ring->nb_bkts, tim_ring->fast_bkt);
- *bkt = &tim_ring->bkt[bucket];
- *mirr_bkt = &tim_ring->bkt[mirr_bucket];
-}
-
-static struct otx2_tim_ent *
-tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
- struct otx2_tim_bkt * const bkt)
-{
-#define TIM_MAX_OUTSTANDING_OBJ 64
- void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
- struct otx2_tim_ent *chunk;
- struct otx2_tim_ent *pnext;
- uint8_t objs = 0;
-
-
- chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk);
- chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk +
- tim_ring->nb_chunk_slots)->w0;
- while (chunk) {
- pnext = (struct otx2_tim_ent *)(uintptr_t)
- ((chunk + tim_ring->nb_chunk_slots)->w0);
- if (objs == TIM_MAX_OUTSTANDING_OBJ) {
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
- objs = 0;
- }
- pend_chunks[objs++] = chunk;
- chunk = pnext;
- }
-
- if (objs)
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
-
- return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk;
-}
-
-static struct otx2_tim_ent *
-tim_refill_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (bkt->nb_entry || !bkt->first_chunk) {
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
- (void **)&chunk)))
- return NULL;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) =
- (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- } else {
- chunk = tim_clr_bkt(tim_ring, bkt);
- bkt->first_chunk = (uintptr_t)chunk;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
-
- return chunk;
-}
-
-static struct otx2_tim_ent *
-tim_insert_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
- return NULL;
-
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- return chunk;
-}
-
-static __rte_always_inline int
-tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Get Bucket sema*/
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
- /* Insert the work. */
- rem = tim_bkt_fetch_rem(lock_sema);
-
- if (!rem) {
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- bkt->chunk_remainder = 0;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- }
-
- /* Copy work entry. */
- *chunk = *pent;
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static __rte_always_inline int
-tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
- /* Get Bucket sema*/
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- rem = tim_bkt_fetch_rem(lock_sema);
- if (rem < 0) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- uint64_t w1;
- asm volatile(" ldxr %[w1], [%[crem]] \n"
- " tbz %[w1], 63, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[w1], [%[crem]] \n"
- " tbnz %[w1], 63, rty%= \n"
- "dne%=: \n"
- : [w1] "=&r"(w1)
- : [crem] "r"(&bkt->w1)
- : "memory");
-#else
- while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
- 0)
- ;
-#endif
- goto __retry;
- } else if (!rem) {
- /* Only one thread can be here*/
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_set_rem(bkt, 0);
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- *chunk = *pent;
- if (tim_bkt_fetch_lock(lock_sema)) {
- do {
- lock_sema = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (tim_bkt_fetch_lock(lock_sema) - 1);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- __atomic_store_n(&bkt->chunk_remainder,
- tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- *chunk = *pent;
- }
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static inline uint16_t
-tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt,
- struct otx2_tim_ent *chunk,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent * const ents,
- const struct otx2_tim_bkt * const bkt)
-{
- for (; index < cpy_lmt; index++) {
- *chunk = *(ents + index);
- tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
- tim[index]->impl_opaque[1] = (uintptr_t)bkt;
- tim[index]->state = RTE_EVENT_TIMER_ARMED;
- }
-
- return index;
-}
-
-/* Burst mode functions */
-static inline int
-tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
- const uint16_t rel_bkt,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent *ents,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_bkt *bkt;
- uint16_t chunk_remainder;
- uint16_t index = 0;
- uint64_t lock_sema;
- int16_t rem, crem;
- uint8_t lock_cnt;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Only one thread beyond this. */
- lock_sema = tim_bkt_inc_lock(bkt);
- lock_cnt = (uint8_t)
- ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK);
-
- if (lock_cnt) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " beq dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " bne rty%= \n"
- "dne%=: \n"
- : [lock_cnt] "=&r"(lock_cnt)
- : [lock] "r"(&bkt->lock)
- : "memory");
-#else
- while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
- ;
-#endif
- goto __retry;
- }
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- chunk_remainder = tim_bkt_fetch_rem(lock_sema);
- rem = chunk_remainder - nb_timers;
- if (rem < 0) {
- crem = tim_ring->nb_chunk_slots - chunk_remainder;
- if (chunk_remainder && crem) {
- chunk = ((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) + crem;
-
- index = tim_cpy_wrk(index, chunk_remainder, chunk, tim,
- ents, bkt);
- tim_bkt_sub_rem(bkt, chunk_remainder);
- tim_bkt_add_nent(bkt, chunk_remainder);
- }
-
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim_bkt_dec_lock(bkt);
- rte_errno = ENOMEM;
- tim[index]->state = RTE_EVENT_TIMER_ERROR;
- return crem;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
-
- rem = nb_timers - chunk_remainder;
- tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
- tim_bkt_add_nent(bkt, rem);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
-
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
- tim_bkt_sub_rem(bkt, nb_timers);
- tim_bkt_add_nent(bkt, nb_timers);
- }
-
- tim_bkt_dec_lock(bkt);
-
- return nb_timers;
-}
-
-static int
-tim_rm_entry(struct rte_event_timer *tim)
-{
- struct otx2_tim_ent *entry;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
-
- if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
- return -ENOENT;
-
- entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0];
- if (entry->wqe != tim->ev.u64) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- return -ENOENT;
- }
-
- bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
- lock_sema = tim_bkt_inc_lock(bkt);
- if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
- return -ENOENT;
- }
-
- entry->w0 = 0;
- entry->wqe = 0;
- tim->state = RTE_EVENT_TIMER_CANCELED;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
-
- return 0;
-}
-
-#endif /* __OTX2_TIM_WORKER_H__ */
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
deleted file mode 100644
index 95139d27a3..0000000000
--- a/drivers/event/octeontx2/otx2_worker.c
+++ /dev/null
@@ -1,372 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
-
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag(ws);
- } else {
- otx2_ssogws_swtag_norm(ws, tag, new_tt);
- }
-
- ws->swtag_req = 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev,
- const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched(ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
- /* Group hasn't changed, Use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(ws->tag_op)) == grp)
- otx2_ssogws_fwd_swtag(ws, ev);
- else
- /*
- * Group has been changed for group based work pipelining,
- * Use deschedule/add_work operation to transfer the event to
- * new group/core
- */
- otx2_ssogws_fwd_group(ws, ev, grp);
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, flags, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-uint16_t __rte_hot
-otx2_ssogws_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws *ws = port;
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_forward_event(ws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_forward_event(ws, ev);
-
- return 1;
-}
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void
-ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
- otx2_handle_event_t fn, void *arg)
-{
- uint64_t cq_ds_cnt = 1;
- uint64_t aq_cnt = 1;
- uint64_t ds_cnt = 1;
- struct rte_event ev;
- uint64_t enable;
- uint64_t val;
-
- enable = otx2_read64(base + SSO_LF_GGRP_QCTL);
- if (!enable)
- return;
-
- val = queue_id; /* GGRP ID */
- val |= BIT_ULL(18); /* Grouped */
- val |= BIT_ULL(16); /* WAIT */
-
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- cq_ds_cnt &= 0x3FFF3FFF0000;
-
- while (aq_cnt || cq_ds_cnt || ds_cnt) {
- otx2_write64(val, ws->getwrk_op);
- otx2_ssogws_get_work_empty(ws, &ev, 0);
- if (fn != NULL && ev.u64 != 0)
- fn(arg, ev);
- if (ev.sched_type != SSO_TT_EMPTY)
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- rte_mb();
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- /* Extract cq and ds count */
- cq_ds_cnt &= 0x3FFF3FFF0000;
- }
-
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_GWC_INVAL);
- rte_mb();
-}
-
-void
-ssogws_reset(struct otx2_ssogws *ws)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t pend_state;
- uint8_t pend_tt;
- uint64_t tag;
-
- /* Wait till getwork/swtp/waitw/desched completes. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58)));
-
- tag = otx2_read64(base + SSOW_LF_GWS_TAG);
- pend_tt = (tag >> 32) & 0x3;
- if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
- if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED)
- otx2_ssogws_swtag_untag(ws);
- otx2_ssogws_desched(ws);
- }
- rte_mb();
-
- /* Wait for desched to complete. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & BIT_ULL(58));
-}
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
deleted file mode 100644
index aa766c6602..0000000000
--- a/drivers/event/octeontx2/otx2_worker.h
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_H__
-#define __OTX2_WORKER_H__
-
-#include <rte_common.h>
-#include <rte_branch_prediction.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-#include "otx2_ethdev_sec_tx.h"
-
-/* SSO Operations */
-
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags, const void * const lookup_mem)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- otx2_write64(BIT_ULL(16) | /* wait for work. */
- 1, /* Use Mask set 0. */
- ws->getwrk_op);
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags,
- lookup_mem);
- /* Extracting tstamp, if PTP enabled*/
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf,
- ws->tstamp, flags,
- (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-/* Used in cleaning up workslot. */
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch_non_temporal((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch_non_temporal((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY &&
- event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags, NULL);
- /* Extracting tstamp, if PTP enabled*/
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)get_work1)
- + OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt,
- uint16_t grp)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
- otx2_write64(val, ws->swtag_desched_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32);
- otx2_write64(val, ws->swtag_norm_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_untag(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_SWTAG_UNTAG);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
-{
- if (OTX2_SSOW_TT_FROM_TAG(otx2_read64(tag_op)) == SSO_TT_EMPTY)
- return;
- otx2_write64(0, flush_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_desched(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_DESCHED);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t swtp;
-
- asm volatile(" ldr %[swtb], [%[swtp_loc]] \n"
- " tbz %[swtb], 62, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[swtb], [%[swtp_loc]] \n"
- " tbnz %[swtb], 62, rty%= \n"
- "done%=: \n"
- : [swtb] "=&r" (swtp)
- : [swtp_loc] "r" (ws->tag_op));
-#else
- /* Wait for the SWTAG/SWTAG_FULL operation */
- while (otx2_read64(ws->tag_op) & BIT_ULL(62))
- ;
-#endif
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t tag_op)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t tag;
-
- asm volatile (
- " ldr %[tag], [%[tag_op]] \n"
- " tbnz %[tag], 35, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_op]] \n"
- " tbz %[tag], 35, rty%= \n"
- "done%=: \n"
- : [tag] "=&r" (tag)
- : [tag_op] "r" (tag_op)
- );
-#else
- /* Wait for the HEAD to be set */
- while (!(otx2_read64(tag_op) & BIT_ULL(35)))
- ;
-#endif
-}
-
-static __rte_always_inline const struct otx2_eth_txq *
-otx2_ssogws_xtract_meta(struct rte_mbuf *m,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
-{
- return (const struct otx2_eth_txq *)txq_data[m->port][
- rte_event_eth_tx_adapter_txq_get(m)];
-}
-
-static __rte_always_inline void
-otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
- uint64_t *cmd, const uint32_t flags)
-{
- otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
- otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
-}
-
-static __rte_always_inline uint16_t
-otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
- const uint32_t flags)
-{
- struct rte_mbuf *m = ev->mbuf;
- const struct otx2_eth_txq *txq;
- uint16_t ref_cnt = m->refcnt;
-
- if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
- (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- return otx2_sec_event_tx(base, ev, m, txq, flags);
- }
-
- /* Perform header writes before barrier for TSO */
- otx2_nix_xmit_prepare_tso(m, flags);
- /* Lets commit any changes in the packet here in case when
- * fast free is set as no further changes will be made to mbuf.
- * In case of fast free is not set, both otx2_nix_prepare_mseg()
- * and otx2_nix_xmit_prepare() has a barrier after refcnt update.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- otx2_ssogws_prepare_pkt(txq, m, cmd, flags);
-
- if (flags & NIX_TX_MULTI_SEG_F) {
- const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, segdw, flags);
- if (!ev->sched_type) {
- otx2_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- } else {
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- }
- } else {
- /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, 4, flags);
-
- if (!ev->sched_type) {
- otx2_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_one(cmd, txq->lmt_addr,
- txq->io_addr, flags);
- } else {
- otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
- flags);
- }
- }
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- if (ref_cnt > 1)
- return 1;
- }
-
- otx2_ssogws_swtag_flush(base + SSOW_LF_GWS_TAG,
- base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
-
- return 1;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
deleted file mode 100644
index 81af4ca904..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker_dual.h"
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
- const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
- const struct rte_event *ev)
-{
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
- } else {
- otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
- }
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
- const struct rte_event *ev, const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
- struct otx2_ssogws_state *vws,
- const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
- /* Group hasn't changed, Use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(vws->tag_op)) == grp) {
- otx2_ssogws_dual_fwd_swtag(vws, ev);
- ws->swtag_req = 1;
- } else {
- /*
- * Group has been changed for group based work pipelining,
- * Use deschedule/add_work operation to transfer the event to
- * new group/core
- */
- otx2_ssogws_dual_fwd_group(vws, ev, grp);
- }
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_dual_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_dual_forward_event(ws, vws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_dual_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_dual_forward_event(ws, vws, ev);
-
- return 1;
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- rte_prefetch_non_temporal(ws); \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags | \
- NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws_dual *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F);\
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
deleted file mode 100644
index 36ae4dd88f..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_DUAL_H__
-#define __OTX2_WORKER_DUAL_H__
-
-#include <rte_branch_prediction.h>
-#include <rte_common.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-
-/* SSO Operations */
-static __rte_always_inline uint16_t
-otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
- struct otx2_ssogws_state *ws_pair,
- struct rte_event *ev, const uint32_t flags,
- const void * const lookup_mem,
- struct otx2_timesync_info * const tstamp)
-{
- const uint64_t set_gw = BIT_ULL(16) | 1;
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- "rty%=: \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: str %[gw], [%[pong]] \n"
- " dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8]\n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op),
- [gw] "r" (set_gw),
- [pong] "r" (ws_pair->getwrk_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
- get_work1 = otx2_read64(ws->wqp_op);
- otx2_write64(set_gw, ws_pair->getwrk_op);
-
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- uint8_t port = event.sub_event_type;
-
- event.sub_event_type = 0;
- otx2_wqe_to_mbuf(get_work1, mbuf, port,
- event.flow_id, flags, lookup_mem);
- /* Extracting tstamp, if PTP enabled. CGX will prepend
- * the timestamp at starting of packet data and it can
- * be derieved from WQE 9 dword which corresponds to SG
- * iova.
- * rte_pktmbuf_mtod_offset can be used for this purpose
- * but it brings down the performance as it reads
- * mbuf->buf_addr which is not part of cache in general
- * fast path.
- */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-#endif
diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/event/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index 57be33b862..ea473552dd 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -161,48 +161,20 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id npa_pci_map[] = {
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
index d295263b87..dc88812585 100644
--- a/drivers/mempool/meson.build
+++ b/drivers/mempool/meson.build
@@ -7,7 +7,6 @@ drivers = [
'dpaa',
'dpaa2',
'octeontx',
- 'octeontx2',
'ring',
'stack',
]
diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build
deleted file mode 100644
index a4bea6d364..0000000000
--- a/drivers/mempool/octeontx2/meson.build
+++ /dev/null
@@ -1,18 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_mempool.c',
- 'otx2_mempool_debug.c',
- 'otx2_mempool_irq.c',
- 'otx2_mempool_ops.c',
-)
-
-deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
deleted file mode 100644
index f63dc06ef2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ /dev/null
@@ -1,457 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_io.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mempool.h"
-
-#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_)
-#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
-
-static inline int
-npa_lf_alloc(struct otx2_npa_lf *lf)
-{
- struct otx2_mbox *mbox = lf->mbox;
- struct npa_lf_alloc_req *req;
- struct npa_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox);
- req->aura_sz = lf->aura_sz;
- req->nr_pools = lf->nr_pools;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return NPA_LF_ERR_ALLOC;
-
- lf->stack_pg_ptrs = rsp->stack_pg_ptrs;
- lf->stack_pg_bytes = rsp->stack_pg_bytes;
- lf->qints = rsp->qints;
-
- return 0;
-}
-
-static int
-npa_lf_free(struct otx2_mbox *mbox)
-{
- otx2_mbox_alloc_msg_npa_lf_free(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz,
- uint32_t nr_pools, struct otx2_mbox *mbox)
-{
- uint32_t i, bmp_sz;
- int rc;
-
- /* Sanity checks */
- if (!lf || !base || !mbox || !nr_pools)
- return NPA_LF_ERR_PARAM;
-
- if (base & AURA_ID_MASK)
- return NPA_LF_ERR_BASE_INVALID;
-
- if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX)
- return NPA_LF_ERR_PARAM;
-
- memset(lf, 0x0, sizeof(*lf));
- lf->base = base;
- lf->aura_sz = aura_sz;
- lf->nr_pools = nr_pools;
- lf->mbox = mbox;
-
- rc = npa_lf_alloc(lf);
- if (rc)
- goto exit;
-
- bmp_sz = rte_bitmap_get_memory_footprint(nr_pools);
-
- /* Allocate memory for bitmap */
- lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz,
- RTE_CACHE_LINE_SIZE);
- if (lf->npa_bmp_mem == NULL) {
- rc = -ENOMEM;
- goto lf_free;
- }
-
- /* Initialize pool resource bitmap array */
- lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz);
- if (lf->npa_bmp == NULL) {
- rc = -EINVAL;
- goto bmap_mem_free;
- }
-
- /* Mark all pools available */
- for (i = 0; i < nr_pools; i++)
- rte_bitmap_set(lf->npa_bmp, i);
-
- /* Allocate memory for qint context */
- lf->npa_qint_mem = rte_zmalloc("npa_qint_mem",
- sizeof(struct otx2_npa_qint) * nr_pools, 0);
- if (lf->npa_qint_mem == NULL) {
- rc = -ENOMEM;
- goto bmap_free;
- }
-
- /* Allocate memory for nap_aura_lim memory */
- lf->aura_lim = rte_zmalloc("npa_aura_lim_mem",
- sizeof(struct npa_aura_lim) * nr_pools, 0);
- if (lf->aura_lim == NULL) {
- rc = -ENOMEM;
- goto qint_free;
- }
-
- /* Init aura start & end limits */
- for (i = 0; i < nr_pools; i++) {
- lf->aura_lim[i].ptr_start = UINT64_MAX;
- lf->aura_lim[i].ptr_end = 0x0ull;
- }
-
- return 0;
-
-qint_free:
- rte_free(lf->npa_qint_mem);
-bmap_free:
- rte_bitmap_free(lf->npa_bmp);
-bmap_mem_free:
- rte_free(lf->npa_bmp_mem);
-lf_free:
- npa_lf_free(lf->mbox);
-exit:
- return rc;
-}
-
-static int
-npa_lf_fini(struct otx2_npa_lf *lf)
-{
- if (!lf)
- return NPA_LF_ERR_PARAM;
-
- rte_free(lf->aura_lim);
- rte_free(lf->npa_qint_mem);
- rte_bitmap_free(lf->npa_bmp);
- rte_free(lf->npa_bmp_mem);
-
- return npa_lf_free(lf->mbox);
-
-}
-
-static inline uint32_t
-otx2_aura_size_to_u32(uint8_t val)
-{
- if (val == NPA_AURA_SZ_0)
- return 128;
- if (val >= NPA_AURA_SZ_MAX)
- return BIT_ULL(20);
-
- return 1 << (val + 6);
-}
-
-static int
-parse_max_pools(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
- if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
- val = 128;
- if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
- val = BIT_ULL(20);
-
- *(uint8_t *)extra_args = rte_log2_u32(val) - 6;
- return 0;
-}
-
-#define OTX2_MAX_POOLS "max_pools"
-
-static uint8_t
-otx2_parse_aura_size(struct rte_devargs *devargs)
-{
- uint8_t aura_sz = NPA_AURA_SZ_128;
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- goto exit;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-exit:
- return aura_sz;
-}
-
-static inline int
-npa_lf_attach(struct otx2_mbox *mbox)
-{
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff)
-{
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- *npa_msixoff = msix_rsp->npa_msixoff;
-
- return rc;
-}
-
-/**
- * @internal
- * Finalize NPA LF.
- */
-int
-otx2_npa_lf_fini(void)
-{
- struct otx2_idev_cfg *idev;
- int rc = 0;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) {
- otx2_npa_unregister_irqs(idev->npa_lf);
- rc |= npa_lf_fini(idev->npa_lf);
- rc |= npa_lf_detach(idev->npa_lf->mbox);
- otx2_npa_set_defaults(idev);
- }
-
- return rc;
-}
-
-/**
- * @internal
- * Initialize NPA LF.
- */
-int
-otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_npa_lf *lf;
- uint16_t npa_msixoff;
- uint32_t nr_pools;
- uint8_t aura_sz;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- /* Is NPA LF initialized by any another driver? */
- if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) {
-
- rc = npa_lf_attach(dev->mbox);
- if (rc)
- goto fail;
-
- rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff);
- if (rc)
- goto npa_detach;
-
- aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
- nr_pools = otx2_aura_size_to_u32(aura_sz);
-
- lf = &dev->npalf;
- rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20),
- aura_sz, nr_pools, dev->mbox);
-
- if (rc)
- goto npa_detach;
-
- lf->pf_func = dev->pf_func;
- lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = pci_dev->intr_handle;
- lf->pci_dev = pci_dev;
-
- idev->npa_pf_func = dev->pf_func;
- idev->npa_lf = lf;
- rte_smp_wmb();
- rc = otx2_npa_register_irqs(lf);
- if (rc)
- goto npa_fini;
-
- rte_mbuf_set_platform_mempool_ops("octeontx2_npa");
- otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x",
- lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff);
- }
-
- return 0;
-
-npa_fini:
- npa_lf_fini(idev->npa_lf);
-npa_detach:
- npa_lf_detach(dev->mbox);
-fail:
- rte_atomic16_dec(&idev->npa_refcnt);
- return rc;
-}
-
-static inline char*
-otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
-{
- snprintf(name, OTX2_NPA_DEV_NAME_LEN,
- OTX2_NPA_DEV_NAME PCI_PRI_FMT,
- pci_dev->addr.domain, pci_dev->addr.bus,
- pci_dev->addr.devid, pci_dev->addr.function);
-
- return name;
-}
-
-static int
-otx2_npa_init(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
- int rc = -ENOMEM;
-
- mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name),
- sizeof(*dev), SOCKET_ID_ANY,
- 0, OTX2_ALIGN);
- if (mz == NULL)
- goto error;
-
- dev = mz->addr;
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc)
- goto malloc_fail;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto dev_uninit;
-
- dev->drv_inited = true;
- return 0;
-
-dev_uninit:
- otx2_npa_lf_fini();
- otx2_dev_fini(pci_dev, dev);
-malloc_fail:
- rte_memzone_free(mz);
-error:
- otx2_err("Failed to initialize npa device rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_npa_fini(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
-
- mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name));
- if (mz == NULL)
- return -EINVAL;
-
- dev = mz->addr;
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("%s: common resource in use by other devices",
- pci_dev->name);
- return -EAGAIN;
- }
-
- otx2_dev_fini(pci_dev, dev);
- rte_memzone_free(mz);
-
- return 0;
-}
-
-static int
-npa_remove(struct rte_pci_device *pci_dev)
-{
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_fini(pci_dev);
-}
-
-static int
-npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- RTE_SET_USED(pci_drv);
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_init(pci_dev);
-}
-
-static const struct rte_pci_id pci_npa_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_npa = {
- .id_table = pci_npa_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = npa_probe,
- .remove = npa_remove,
-};
-
-RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
-RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
-RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
deleted file mode 100644
index 8aa548248d..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MEMPOOL_H__
-#define __OTX2_MEMPOOL_H__
-
-#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
-#include <rte_devargs.h>
-#include <rte_mempool.h>
-
-#include "otx2_common.h"
-#include "otx2_mbox.h"
-
-enum npa_lf_status {
- NPA_LF_ERR_PARAM = -512,
- NPA_LF_ERR_ALLOC = -513,
- NPA_LF_ERR_INVALID_BLOCK_SZ = -514,
- NPA_LF_ERR_AURA_ID_ALLOC = -515,
- NPA_LF_ERR_AURA_POOL_INIT = -516,
- NPA_LF_ERR_AURA_POOL_FINI = -517,
- NPA_LF_ERR_BASE_INVALID = -518,
-};
-
-struct otx2_npa_lf;
-struct otx2_npa_qint {
- struct otx2_npa_lf *lf;
- uint8_t qintx;
-};
-
-struct npa_aura_lim {
- uint64_t ptr_start;
- uint64_t ptr_end;
-};
-
-struct otx2_npa_lf {
- uint16_t qints;
- uintptr_t base;
- uint8_t aura_sz;
- uint16_t pf_func;
- uint32_t nr_pools;
- void *npa_bmp_mem;
- void *npa_qint_mem;
- uint16_t npa_msixoff;
- struct otx2_mbox *mbox;
- uint32_t stack_pg_ptrs;
- uint32_t stack_pg_bytes;
- struct rte_bitmap *npa_bmp;
- struct npa_aura_lim *aura_lim;
- struct rte_pci_device *pci_dev;
- struct rte_intr_handle *intr_handle;
-};
-
-#define AURA_ID_MASK (BIT_ULL(16) - 1)
-
-/*
- * Generate 64bit handle to have optimized alloc and free aura operation.
- * 0 - AURA_ID_MASK for storing the aura_id.
- * AURA_ID_MASK+1 - (2^64 - 1) for storing the lf base address.
- * This scheme is valid when OS can give AURA_ID_MASK
- * aligned address for lf base address.
- */
-static inline uint64_t
-npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
-{
- uint64_t val;
-
- val = aura_id & AURA_ID_MASK;
- return (uint64_t)addr | val;
-}
-
-static inline uint64_t
-npa_lf_aura_handle_to_aura(uint64_t aura_handle)
-{
- return aura_handle & AURA_ID_MASK;
-}
-
-static inline uintptr_t
-npa_lf_aura_handle_to_base(uint64_t aura_handle)
-{
- return (uintptr_t)(aura_handle & ~AURA_ID_MASK);
-}
-
-static inline uint64_t
-npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop)
-{
- uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (drop)
- wdata |= BIT_ULL(63); /* DROP */
-
- return otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_ALLOCX(0)));
-}
-
-static inline void
-npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (fabs)
- reg |= BIT_ULL(63); /* FABS */
-
- otx2_store_pair(iova, reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0);
-}
-
-static inline uint64_t
-npa_lf_aura_op_cnt_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_CNT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
-{
- uint64_t reg = count & (BIT_ULL(36) - 1);
-
- if (sign)
- reg |= BIT_ULL(43); /* CNT_ADD */
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_limit_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_LIMIT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
-{
- uint64_t reg = limit & (BIT_ULL(36) - 1);
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_available(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(
- aura_handle) + NPA_LF_POOL_OP_AVAILABLE));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
- uint64_t end_iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
-
- lim[reg].ptr_start = RTE_MIN(lim[reg].ptr_start, start_iova);
- lim[reg].ptr_end = RTE_MAX(lim[reg].ptr_end, end_iova);
-
- otx2_store_pair(lim[reg].ptr_start, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_START0);
- otx2_store_pair(lim[reg].ptr_end, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_END0);
-}
-
-/* NPA LF */
-__rte_internal
-int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_npa_lf_fini(void);
-
-/* IRQ */
-int otx2_npa_register_irqs(struct otx2_npa_lf *lf);
-void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf);
-
-/* Debug */
-int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf);
-
-#endif /* __OTX2_MEMPOOL_H__ */
diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c
deleted file mode 100644
index 279ea2e25f..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_debug.c
+++ /dev/null
@@ -1,135 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_mempool.h"
-
-#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-
-static inline void
-npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool)
-{
- npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base);
- npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
- pool->ena, pool->nat_align, pool->stack_caching);
- npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d",
- pool->stack_way_mask, pool->buf_offset);
- npa_dump("W1: buf_size \t\t%d", pool->buf_size);
-
- npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d",
- pool->stack_max_pages, pool->stack_pages);
-
- npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc);
-
- npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d",
- pool->stack_offset, pool->shift, pool->avg_level);
- npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d",
- pool->avg_con, pool->fc_ena, pool->fc_stype);
- npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d",
- pool->fc_hyst_bits, pool->fc_up_crossing);
- npa_dump("W4: update_time\t\t%d\n", pool->update_time);
-
- npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr);
-
- npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start);
-
- npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end);
- npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d",
- pool->err_int, pool->err_int_ena);
- npa_dump("W8: thresh_int\t\t%d", pool->thresh_int);
-
- npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d",
- pool->thresh_int_ena, pool->thresh_up);
- npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d",
- pool->thresh_qint_idx, pool->err_qint_idx);
-}
-
-static inline void
-npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura)
-{
- npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr);
-
- npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d",
- aura->ena, aura->pool_caching, aura->pool_way_mask);
- npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d",
- aura->avg_con, aura->pool_drop_ena);
- npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena);
- npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d",
- aura->bp_ena, aura->aura_drop, aura->shift);
- npa_dump("W1: avg_level\t\t%d\n", aura->avg_level);
-
- npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d",
- (uint64_t)aura->count, aura->nix0_bpid);
- npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid);
-
- npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n",
- (uint64_t)aura->limit, aura->bp, aura->fc_ena);
- npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d",
- aura->fc_up_crossing, aura->fc_stype);
-
- npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits);
-
- npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr);
-
- npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d",
- aura->pool_drop, aura->update_time);
- npa_dump("W5: err_int\t\t%d", aura->err_int);
- npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d",
- aura->err_int_ena, aura->thresh_int);
- npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena);
-
- npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d",
- aura->thresh_up, aura->thresh_qint_idx);
- npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx);
-
- npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh);
-}
-
-int
-otx2_mempool_ctx_dump(struct otx2_npa_lf *lf)
-{
- struct npa_aq_enq_req *aq;
- struct npa_aq_enq_rsp *rsp;
- uint32_t q;
- int rc = 0;
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_POOL;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(%d) context", q);
- return rc;
- }
- npa_dump("============== pool=%d ===============\n", q);
- npa_lf_pool_dump(&rsp->pool);
- }
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_AURA;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get aura(%d) context", q);
- return rc;
- }
- npa_dump("============== aura=%d ===============\n", q);
- npa_lf_aura_dump(&rsp->aura);
- }
-
- return rc;
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
deleted file mode 100644
index 5fa22b9612..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ /dev/null
@@ -1,303 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_common.h>
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-
-static void
-npa_lf_err_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_ERR_INT);
-}
-
-static int
-npa_lf_register_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- /* Register err interrupt vector */
- rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec);
-
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec);
-}
-
-static void
-npa_lf_ras_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_RAS);
-}
-
-static int
-npa_lf_register_ras_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf)
-{
- int vec;
- struct rte_intr_handle *handle = lf->intr_handle;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec);
-}
-
-static inline uint8_t
-npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, lf->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p)
-{
- return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a)
-{
- return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
-}
-
-static void
-npa_lf_q_irq(void *param)
-{
- struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param;
- struct otx2_npa_lf *lf = qint->lf;
- uint8_t irq, qintx = qint->qintx;
- uint32_t q, pool, aura;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
-
- /* Handle pool queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- pool = q % lf->qints;
- irq = npa_lf_pool_irq_get_and_clear(lf, pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
- }
-
- /* Handle aura queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
-
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aura = q % lf->qints;
- irq = npa_lf_aura_irq_get_and_clear(lf, aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
- otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
- }
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
- otx2_mempool_ctx_dump(lf);
-}
-
-static int
-npa_lf_register_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs, rc = 0;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- qintmem->lf = lf;
- qintmem->qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec);
- if (rc)
- break;
-
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-static void
-npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec);
-
- qintmem->lf = NULL;
- qintmem->qintx = 0;
- }
-}
-
-int
-otx2_npa_register_irqs(struct otx2_npa_lf *lf)
-{
- int rc;
-
- if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x",
- lf->npa_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = npa_lf_register_err_irq(lf);
- /* Register RAS interrupt */
- rc |= npa_lf_register_ras_irq(lf);
- /* Register queue interrupts */
- rc |= npa_lf_register_queue_irqs(lf);
-
- return rc;
-}
-
-void
-otx2_npa_unregister_irqs(struct otx2_npa_lf *lf)
-{
- npa_lf_unregister_err_irq(lf);
- npa_lf_unregister_ras_irq(lf);
- npa_lf_unregister_queue_irqs(lf);
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
deleted file mode 100644
index 332e4f1cb2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ /dev/null
@@ -1,901 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_mempool.h>
-#include <rte_vect.h>
-
-#include "otx2_mempool.h"
-
-static int __rte_hot
-otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
-{
- unsigned int index; const uint64_t aura_handle = mp->pool_id;
- const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_FREE0;
-
- /* Ensure mbuf init changes are written before the free pointers
- * are enqueued to the stack.
- */
- rte_io_wmb();
- for (index = 0; index < n; index++)
- otx2_store_pair((uint64_t)obj_table[index], reg, addr);
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
- void **obj_table, uint8_t i)
-{
- uint8_t retry = 4;
-
- do {
- obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr);
- if (obj_table[i] != NULL)
- return 0;
-
- } while (retry--);
-
- return -ENOENT;
-}
-
-#if defined(RTE_ARCH_ARM64)
-static __rte_noinline int
-npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
- void **obj_table, unsigned int n)
-{
- uint8_t i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL)
- continue;
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
- return -ENOENT;
- }
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
- unsigned int n, void **obj_table)
-{
- register const uint64_t wdata64 __asm("x26") = wdata;
- register const uint64_t wdata128 __asm("x27") = wdata;
- uint64x2_t failed = vdupq_n_u64(~0);
-
- switch (n) {
- case 32:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x16, x17, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x18, x19, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x20, x21, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x22, x23, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- "fmov d16, x16\n"
- "fmov v16.D[1], x17\n"
- "fmov d17, x18\n"
- "fmov v17.D[1], x19\n"
- "fmov d18, x20\n"
- "fmov v18.D[1], x21\n"
- "fmov d19, x22\n"
- "fmov v19.D[1], x23\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x0\n"
- "fmov v20.D[1], x1\n"
- "fmov d21, x2\n"
- "fmov v21.D[1], x3\n"
- "fmov d22, x4\n"
- "fmov v22.D[1], x5\n"
- "fmov d23, x6\n"
- "fmov v23.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16",
- "x17", "x18", "x19", "x20", "x21", "x22", "x23", "v16", "v17",
- "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 16:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "v16",
- "v17", "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 8:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "v16", "v17", "v18", "v19"
- );
- break;
- }
- case 4:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "st1 { v16.2d, v17.2d}, [%[dst]], 32\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "v16", "v17"
- );
- break;
- }
- case 2:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "st1 { v16.2d}, [%[dst]], 16\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "v16"
- );
- break;
- }
- case 1:
- return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
- }
-
- if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1))))
- return npa_lf_aura_op_search_alloc(wdata, addr, (void **)
- ((char *)obj_table - (sizeof(uint64_t) * n)), n);
-
- return 0;
-}
-
-static __rte_noinline void
-otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- unsigned int i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL) {
- otx2_npa_enq(mp, &obj_table[i], 1);
- obj_table[i] = NULL;
- }
- }
-}
-
-static __rte_noinline int __rte_hot
-otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- void **obj_table_bak = obj_table;
- const unsigned int nfree = n;
- unsigned int parts;
-
- int64_t * const addr = (int64_t * const)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- while (n) {
- parts = n > 31 ? 32 : rte_align32prevpow2(n);
- n -= parts;
- if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
- parts, obj_table))) {
- otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
- return -ENOENT;
- }
- obj_table += parts;
- }
-
- return 0;
-}
-
-#else
-
-static inline int __rte_hot
-otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- unsigned int index;
- uint64_t obj;
-
- int64_t * const addr = (int64_t *)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- for (index = 0; index < n; index++, obj_table++) {
- obj = npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
- if (obj == 0) {
- for (; index > 0; index--) {
- obj_table--;
- otx2_npa_enq(mp, obj_table, 1);
- }
- return -ENOENT;
- }
- *obj_table = (void *)obj;
- }
-
- return 0;
-}
-
-#endif
-
-static unsigned int
-otx2_npa_get_count(const struct rte_mempool *mp)
-{
- return (unsigned int)npa_lf_aura_op_available(mp->pool_id);
-}
-
-static int
-npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
- struct npa_aura_s *aura, struct npa_pool_s *pool)
-{
- struct npa_aq_enq_req *aura_init_req, *pool_init_req;
- struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
- off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff;
- pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc == 2 && aura_init_rsp->hdr.rc == 0 && pool_init_rsp->hdr.rc == 0)
- return 0;
- else
- return NPA_LF_ERR_AURA_POOL_INIT;
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return -ENOMEM;
- }
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to lock POOL ctx to NDC");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static int
-npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
- uint32_t aura_id,
- uint64_t aura_handle)
-{
- struct npa_aq_enq_req *aura_req, *pool_req;
- struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct ndc_sync_op *ndc_req;
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -EINVAL;
-
- /* Procedure for disabling an aura/pool */
- rte_delay_us(10);
- npa_lf_aura_op_alloc(aura_handle, 0);
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_WRITE;
- pool_req->pool.ena = 0;
- pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
- aura_req->aura.ena = 0;
- aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
- aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_FINI;
-
- /* Sync NDC-NPA for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->npa_lf_sync = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
- return NPA_LF_ERR_AURA_POOL_FINI;
- }
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock AURA ctx to NDC");
- return -EINVAL;
- }
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock POOL ctx to NDC");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static inline char*
-npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
-{
- snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d",
- lf->pf_func, pool_id);
-
- return name;
-}
-
-static inline const struct rte_memzone *
-npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
- int pool_id, size_t size)
-{
- return rte_memzone_reserve_aligned(
- npa_lf_stack_memzone_name(lf, pool_id, name), size, 0,
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-}
-
-static inline int
-npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
- if (mz == NULL)
- return -EINVAL;
-
- return rte_memzone_free(mz);
-}
-
-static inline int
-bitmap_ctzll(uint64_t slab)
-{
- if (slab == 0)
- return 0;
-
- return __builtin_ctzll(slab);
-}
-
-static int
-npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
- const uint32_t block_count, struct npa_aura_s *aura,
- struct npa_pool_s *pool, uint64_t *aura_handle)
-{
- int rc, aura_id, pool_id, stack_size, alloc_size;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- uint64_t slab;
- uint32_t pos;
-
- /* Sanity check */
- if (!lf || !block_size || !block_count ||
- !pool || !aura || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- /* Block size should be cache line aligned and in range of 128B-128KB */
- if (block_size % OTX2_ALIGN || block_size < 128 ||
- block_size > 128 * 1024)
- return NPA_LF_ERR_INVALID_BLOCK_SZ;
-
- pos = slab = 0;
- /* Scan from the beginning */
- __rte_bitmap_scan_init(lf->npa_bmp);
- /* Scan bitmap to get the free pool */
- rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab);
- /* Empty bitmap */
- if (rc == 0) {
- otx2_err("Mempools exhausted, 'max_pools' devargs to increase");
- return -ERANGE;
- }
-
- /* Get aura_id from resource bitmap */
- aura_id = pos + bitmap_ctzll(slab);
- /* Mark pool as reserved */
- rte_bitmap_clear(lf->npa_bmp, aura_id);
-
- /* Configuration based on each aura has separate pool(aura-pool pair) */
- pool_id = aura_id;
- rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >=
- (int)BIT_ULL(6 + lf->aura_sz)) ? NPA_LF_ERR_AURA_ID_ALLOC : 0;
- if (rc)
- goto exit;
-
- /* Allocate stack memory */
- stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs;
- alloc_size = stack_size * lf->stack_pg_bytes;
-
- mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size);
- if (mz == NULL) {
- rc = -ENOMEM;
- goto aura_res_put;
- }
-
- /* Update aura fields */
- aura->pool_addr = pool_id;/* AF will translate to associated poolctx */
- aura->ena = 1;
- aura->shift = rte_log2_u32(block_count);
- aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
- aura->limit = block_count;
- aura->pool_caching = 1;
- aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
- /* Many to one reduction */
- aura->err_qint_idx = aura_id % lf->qints;
-
- /* Update pool fields */
- pool->stack_base = mz->iova;
- pool->ena = 1;
- pool->buf_size = block_size / OTX2_ALIGN;
- pool->stack_max_pages = stack_size;
- pool->shift = rte_log2_u32(block_count);
- pool->shift = pool->shift < 8 ? 0 : pool->shift - 8;
- pool->ptr_start = 0;
- pool->ptr_end = ~0;
- pool->stack_caching = 1;
- pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
-
- /* Many to one reduction */
- pool->err_qint_idx = pool_id % lf->qints;
-
- /* Issue AURA_INIT and POOL_INIT op */
- rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool);
- if (rc)
- goto stack_mem_free;
-
- *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base);
-
- /* Update aura count */
- npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count);
- /* Read it back to make sure aura count is updated */
- npa_lf_aura_op_cnt_get(*aura_handle);
-
- return 0;
-
-stack_mem_free:
- rte_memzone_free(mz);
-aura_res_put:
- rte_bitmap_set(lf->npa_bmp, aura_id);
-exit:
- return rc;
-}
-
-static int
-npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- int aura_id, pool_id, rc;
-
- if (!lf || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
- rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
- rc |= npa_lf_stack_dma_free(lf, name, pool_id);
-
- rte_bitmap_set(lf->npa_bmp, aura_id);
-
- return rc;
-}
-
-static int
-npa_lf_aura_range_update_check(uint64_t aura_handle)
-{
- uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
- __otx2_io struct npa_pool_s *pool;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-
- req->aura_id = aura_id;
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
- return rc;
- }
-
- pool = &rsp->pool;
-
- if (lim[aura_id].ptr_start != pool->ptr_start ||
- lim[aura_id].ptr_end != pool->ptr_end) {
- otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
- return -ERANGE;
- }
-
- return 0;
-}
-
-static int
-otx2_npa_alloc(struct rte_mempool *mp)
-{
- uint32_t block_size, block_count;
- uint64_t aura_handle = 0;
- struct otx2_npa_lf *lf;
- struct npa_aura_s aura;
- struct npa_pool_s pool;
- size_t padding;
- int rc;
-
- lf = otx2_npa_lf_obj_get();
- if (lf == NULL) {
- rc = -EINVAL;
- goto error;
- }
-
- block_size = mp->elt_size + mp->header_size + mp->trailer_size;
- /*
- * OCTEON TX2 has 8 sets, 41 ways L1D cache, VA<9:7> bits dictate
- * the set selection.
- * Add additional padding to ensure that the element size always
- * occupies odd number of cachelines to ensure even distribution
- * of elements among L1D cache sets.
- */
- padding = ((block_size / RTE_CACHE_LINE_SIZE) % 2) ? 0 :
- RTE_CACHE_LINE_SIZE;
- mp->trailer_size += padding;
- block_size += padding;
-
- block_count = mp->size;
-
- if (block_size % OTX2_ALIGN != 0) {
- otx2_err("Block size should be multiple of 128B");
- rc = -ERANGE;
- goto error;
- }
-
- memset(&aura, 0, sizeof(struct npa_aura_s));
- memset(&pool, 0, sizeof(struct npa_pool_s));
- pool.nat_align = 1;
- pool.buf_offset = 1;
-
- if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) {
- otx2_err("Unsupported mp->header_size=%d", mp->header_size);
- rc = -EINVAL;
- goto error;
- }
-
- /* Use driver specific mp->pool_config to override aura config */
- if (mp->pool_config != NULL)
- memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
-
- rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count,
- &aura, &pool, &aura_handle);
- if (rc) {
- otx2_err("Failed to alloc pool or aura rc=%d", rc);
- goto error;
- }
-
- /* Store aura_handle for future queue operations */
- mp->pool_id = aura_handle;
- otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64,
- lf, block_size, block_count, aura_handle);
-
- /* Just hold the reference of the object */
- otx2_npa_lf_obj_ref();
- return 0;
-error:
- return rc;
-}
-
-static void
-otx2_npa_free(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- int rc = 0;
-
- otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
- if (lf != NULL)
- rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
-
- if (rc)
- otx2_err("Failed to free pool or aura rc=%d", rc);
-
- /* Release the reference of npalf */
- otx2_npa_lf_fini();
-}
-
-static ssize_t
-otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
- uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
-{
- size_t total_elt_sz;
-
- /* Need space for one more obj on each chunk to fulfill
- * alignment requirements.
- */
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
- return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
- total_elt_sz, min_chunk_size,
- align);
-}
-
-static uint8_t
-otx2_npa_l1d_way_set_get(uint64_t iova)
-{
- return (iova >> rte_log2_u32(RTE_CACHE_LINE_SIZE)) & 0x7;
-}
-
-static int
-otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
- rte_iova_t iova, size_t len,
- rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
-{
-#define OTX2_L1D_NB_SETS 8
- uint64_t distribution[OTX2_L1D_NB_SETS];
- rte_iova_t start_iova;
- size_t total_elt_sz;
- uint8_t set;
- size_t off;
- int i;
-
- if (iova == RTE_BAD_IOVA)
- return -EINVAL;
-
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
- /* Align object start address to a multiple of total_elt_sz */
- off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
-
- if (len < off)
- return -EINVAL;
-
-
- vaddr = (char *)vaddr + off;
- iova += off;
- len -= off;
-
- memset(distribution, 0, sizeof(uint64_t) * OTX2_L1D_NB_SETS);
- start_iova = iova;
- while (start_iova < iova + len) {
- set = otx2_npa_l1d_way_set_get(start_iova + mp->header_size);
- distribution[set]++;
- start_iova += total_elt_sz;
- }
-
- otx2_npa_dbg("iova %"PRIx64", aligned iova %"PRIx64"", iova - off,
- iova);
- otx2_npa_dbg("length %"PRIu64", aligned length %"PRIu64"",
- (uint64_t)(len + off), (uint64_t)len);
- otx2_npa_dbg("element size %"PRIu64"", (uint64_t)total_elt_sz);
- otx2_npa_dbg("requested objects %"PRIu64", possible objects %"PRIu64"",
- (uint64_t)max_objs, (uint64_t)(len / total_elt_sz));
- otx2_npa_dbg("L1D set distribution :");
- for (i = 0; i < OTX2_L1D_NB_SETS; i++)
- otx2_npa_dbg("set[%d] : objects : %"PRIu64"", i,
- distribution[i]);
-
- npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
-
- if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
- return -EBUSY;
-
- return rte_mempool_op_populate_helper(mp,
- RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
- max_objs, vaddr, iova, len,
- obj_cb, obj_cb_arg);
-}
-
-static struct rte_mempool_ops otx2_npa_ops = {
- .name = "octeontx2_npa",
- .alloc = otx2_npa_alloc,
- .free = otx2_npa_free,
- .enqueue = otx2_npa_enq,
- .get_count = otx2_npa_get_count,
- .calc_mem_size = otx2_npa_calc_mem_size,
- .populate = otx2_npa_populate,
-#if defined(RTE_ARCH_ARM64)
- .dequeue = otx2_npa_deq_arm64,
-#else
- .dequeue = otx2_npa_deq,
-#endif
-};
-
-RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/octeontx2/version.map b/drivers/mempool/octeontx2/version.map
deleted file mode 100644
index e6887ceb8f..0000000000
--- a/drivers/mempool/octeontx2/version.map
+++ /dev/null
@@ -1,8 +0,0 @@
-INTERNAL {
- global:
-
- otx2_npa_lf_fini;
- otx2_npa_lf_init;
-
- local: *;
-};
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index f8f3d3895e..d34bc6898f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -579,6 +579,21 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_nix_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_AF_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 2355d1cde8..e35652fe63 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -45,7 +45,6 @@ drivers = [
'ngbe',
'null',
'octeontx',
- 'octeontx2',
'octeontx_ep',
'pcap',
'pfe',
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
deleted file mode 100644
index ab15844cbc..0000000000
--- a/drivers/net/octeontx2/meson.build
+++ /dev/null
@@ -1,47 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_rx.c',
- 'otx2_tx.c',
- 'otx2_tm.c',
- 'otx2_rss.c',
- 'otx2_mac.c',
- 'otx2_ptp.c',
- 'otx2_flow.c',
- 'otx2_link.c',
- 'otx2_vlan.c',
- 'otx2_stats.c',
- 'otx2_mcast.c',
- 'otx2_lookup.c',
- 'otx2_ethdev.c',
- 'otx2_flow_ctrl.c',
- 'otx2_flow_dump.c',
- 'otx2_flow_parse.c',
- 'otx2_flow_utils.c',
- 'otx2_ethdev_irq.c',
- 'otx2_ethdev_ops.c',
- 'otx2_ethdev_sec.c',
- 'otx2_ethdev_debug.c',
- 'otx2_ethdev_devargs.c',
-)
-
-deps += ['bus_pci', 'cryptodev', 'eventdev', 'security']
-deps += ['common_octeontx2', 'mempool_octeontx2']
-
-extra_flags = ['-flax-vector-conversions']
-foreach flag: extra_flags
- if cc.has_argument(flag)
- cflags += flag
- endif
-endforeach
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../crypto/octeontx2')
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
deleted file mode 100644
index 4f1c0b98de..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ /dev/null
@@ -1,2814 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <ethdev_pci.h>
-#include <rte_io.h>
-#include <rte_malloc.h>
-#include <rte_mbuf.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_mempool.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-
-static inline uint64_t
-nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_RX_OFFLOAD_CAPA;
-
- if (otx2_dev_is_vf(dev) ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
-
- return capa;
-}
-
-static inline uint64_t
-nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_TX_OFFLOAD_CAPA;
-
- /* TSO not supported for earlier chip revisions */
- if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
- return capa;
-}
-
-static const struct otx2_dev_ops otx2_dev_ops = {
- .link_status_update = otx2_eth_dev_link_status_update,
- .ptp_info_update = otx2_eth_dev_ptp_info_update,
- .link_status_get = otx2_eth_dev_link_status_get,
-};
-
-static int
-nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_alloc_req *req;
- struct nix_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
- req->rq_cnt = nb_rxq;
- req->sq_cnt = nb_txq;
- req->cq_cnt = nb_rxq;
- /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */
- RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
- req->xqe_sz = NIX_XQESZ_W16;
- req->rss_sz = dev->rss_info.rss_size;
- req->rss_grps = NIX_RSS_GRPS;
- req->npa_func = otx2_npa_pf_func_get();
- req->sso_func = otx2_sso_pf_func_get();
- req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
- req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
- req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
- }
- req->rx_cfg |= (BIT_ULL(32 /* DROP_RE */) |
- BIT_ULL(33 /* Outer L2 Length */) |
- BIT_ULL(38 /* Inner L4 UDP Length */) |
- BIT_ULL(39 /* Inner L3 Length */) |
- BIT_ULL(40 /* Outer L4 UDP Length */) |
- BIT_ULL(41 /* Outer L3 Length */));
-
- if (dev->rss_tag_as_xor == 0)
- req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->sqb_size = rsp->sqb_size;
- dev->tx_chan_base = rsp->tx_chan_base;
- dev->rx_chan_base = rsp->rx_chan_base;
- dev->rx_chan_cnt = rsp->rx_chan_cnt;
- dev->tx_chan_cnt = rsp->tx_chan_cnt;
- dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
- dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
- dev->lf_tx_stats = rsp->lf_tx_stats;
- dev->lf_rx_stats = rsp->lf_rx_stats;
- dev->cints = rsp->cints;
- dev->qints = rsp->qints;
- dev->npc_flow.channel = dev->rx_chan_base;
- dev->ptp_en = rsp->hw_rx_tstamp_en;
-
- return 0;
-}
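For reference, the `rx_cfg` assembly in `nix_lf_alloc()` above is plain bit composition into a 64-bit word. A minimal, self-contained sketch (the bit positions follow the inline comments in the original code; the helper name and `csum_offload` flag are hypothetical):

```c
#include <stdint.h>

#define BITULL(x) (1ULL << (x))

/* Build a NIX LF rx_cfg word as in nix_lf_alloc() above. */
static uint64_t build_rx_cfg(int csum_offload)
{
	uint64_t cfg = BITULL(35);		/* DIS_APAD */

	if (csum_offload)
		cfg |= BITULL(37) | BITULL(36);	/* CSUM_OL4 | CSUM_IL4 */

	cfg |= BITULL(32) | BITULL(33) |	/* DROP_RE, outer L2 length */
	       BITULL(38) | BITULL(39) |	/* inner L4 UDP, inner L3 length */
	       BITULL(40) | BITULL(41);		/* outer L4 UDP, outer L3 length */
	return cfg;
}
```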
-
-static int
-nix_lf_switch_header_type_enable(struct otx2_eth_dev *dev, bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_set_pkind *req;
- struct msg_resp *rsp;
- int rc;
-
- if (dev->npc_flow.switch_header_type == 0)
- return 0;
-
- /* Notify AF about higig2 config */
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN90B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_CH_LEN_24B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN24B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_EXDSA_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_VLAN_EXDSA_PKIND;
- }
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_RX;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_24B)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_TX;
- return otx2_mbox_process_msg(mbox, (void *)&rsp);
-}
-
-static int
-nix_lf_free(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_free_req *req;
- struct ndc_sync_op *ndc_req;
- int rc;
-
- /* Sync NDC-NIX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- ndc_req->nix_lf_rx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
-
- req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
- /* Let the AF driver free all of this NIX LF's
- * NPC entries allocated via the NPC mailbox.
- */
- req->flags = 0;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_enable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_disable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_start_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (en && otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static inline void
-nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
-{
- rxq->head = 0;
- rxq->available = 0;
-}
-
-static inline uint32_t
-nix_qsize_to_val(enum nix_q_size_e qsize)
-{
- return (16UL << (qsize * 2));
-}
-
-static inline enum nix_q_size_e
-nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
-{
- int i;
-
- if (otx2_ethdev_fixup_is_min_4k_q(dev))
- i = nix_q_size_4K;
- else
- i = nix_q_size_16;
-
- for (; i < nix_q_size_max; i++)
- if (val <= nix_qsize_to_val(i))
- break;
-
- if (i >= nix_q_size_max)
- i = nix_q_size_max - 1;
-
- return i;
-}
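The queue-size handling above maps an enum step to a depth of `16 << (qsize * 2)` (each step quadruples the depth) and clamps a requested descriptor count up to the next supported size. A simplified sketch, ignoring the min-4K fixup path (enum names here are hypothetical stand-ins for `nix_q_size_e`):

```c
#include <stdint.h>

enum q_size { Q_16 = 0, Q_64, Q_256, Q_1K, Q_4K, Q_16K, Q_64K, Q_MAX };

/* Each enum step quadruples the queue depth, starting at 16. */
static inline uint32_t qsize_to_val(enum q_size q)
{
	return 16UL << (q * 2);
}

/* Clamp a requested descriptor count up to the next supported size. */
static enum q_size qsize_clampup(uint32_t val)
{
	int i;

	for (i = Q_16; i < Q_MAX; i++)
		if (val <= qsize_to_val(i))
			break;
	return i >= Q_MAX ? Q_MAX - 1 : i;
}
```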
-
-static int
-nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
- uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
-{
- struct otx2_mbox *mbox = dev->mbox;
- const struct rte_memzone *rz;
- uint32_t ring_size, cq_size;
- struct nix_aq_enq_req *aq;
- uint16_t first_skip;
- int rc;
-
- cq_size = rxq->qlen;
- ring_size = cq_size * NIX_CQ_ENTRY_SZ;
- rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
- NIX_CQ_ALIGN, dev->node);
- if (rz == NULL) {
- otx2_err("Failed to allocate mem for cq hw ring");
- return -ENOMEM;
- }
- memset(rz->addr, 0, rz->len);
- rxq->desc = (uintptr_t)rz->addr;
- rxq->qmask = cq_size - 1;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
- aq->cq.qsize = rxq->qsize;
- aq->cq.base = rz->iova;
- aq->cq.avg_level = 0xff;
- aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
- aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
-
- /* Many to one reduction */
- aq->cq.qint_idx = qid % dev->qints;
- /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
- aq->cq.cint_idx = qid;
-
- if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
- const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID;
- uint16_t min_rx_drop;
-
- min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
- aq->cq.drop = min_rx_drop;
- aq->cq.drop_ena = 1;
- rxq->cq_drop = min_rx_drop;
- } else {
- rxq->cq_drop = NIX_CQ_THRESH_LEVEL;
- aq->cq.drop = rxq->cq_drop;
- aq->cq.drop_ena = 1;
- }
-
- /* TX pause frames enable flowctrl on RX side */
- if (dev->fc_info.tx_pause) {
- /* Single bpid is allocated for all rx channels for now */
- aq->cq.bpid = dev->fc_info.bpid[0];
- aq->cq.bp = rxq->cq_drop;
- aq->cq.bp_ena = 1;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->rq.sso_ena = 0;
-
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- aq->rq.ipsech_ena = 1;
-
- aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
- aq->rq.spb_ena = 0;
- aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- first_skip = (sizeof(struct rte_mbuf));
- first_skip += RTE_PKTMBUF_HEADROOM;
- first_skip += rte_pktmbuf_priv_size(mp);
- rxq->data_off = first_skip;
-
- first_skip /= 8; /* Expressed in number of dwords */
- aq->rq.first_skip = first_skip;
- aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
- aq->rq.flow_tagw = 32; /* 32-bits */
- aq->rq.lpb_sizem1 = mp->elt_size / 8;
- aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
- aq->rq.rq_int_ena = 0;
- /* Many to one reduction */
- aq->rq.qint_idx = qid % dev->qints;
-
- aq->rq.xqe_drop_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init rq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to LOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to LOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
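The LOCK path above uses a recurring mailbox idiom: if allocating a request slot returns NULL, the shared buffer is full, so flush pending messages and allocate once more. A toy model of that retry pattern (all types and helpers here are hypothetical stand-ins, not the real otx2 mailbox API):

```c
#include <stddef.h>

struct mbox { int used, cap; };

static void *mbox_alloc(struct mbox *m)
{
	static int slot;

	if (m->used >= m->cap)
		return NULL;	/* shared memory buffer is full */
	m->used++;
	return &slot;
}

static void mbox_flush(struct mbox *m) { m->used = 0; }

/* Allocate a request slot; on failure flush and retry once. */
static void *mbox_alloc_retry(struct mbox *m)
{
	void *req = mbox_alloc(m);

	if (req == NULL) {
		mbox_flush(m);
		req = mbox_alloc(m);
	}
	return req;
}
```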
-
-static int
-nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
- struct otx2_eth_rxq *rxq, const bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
-
- /* Pkts will be dropped silently if RQ is disabled */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.ena = enb;
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- /* RQ is already disabled */
- /* Disable CQ */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to UNLOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static inline int
-nix_get_data_off(struct otx2_eth_dev *dev)
-{
- return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
-}
-
-uint64_t
-otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
-{
- struct rte_mbuf mb_def;
- uint64_t *tmp;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
- offsetof(struct rte_mbuf, data_off) != 2);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
- offsetof(struct rte_mbuf, data_off) != 4);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
- offsetof(struct rte_mbuf, data_off) != 6);
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
- mb_def.port = port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* Prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- tmp = (uint64_t *)&mb_def.rearm_data;
-
- return *tmp;
-}
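The `RTE_BUILD_BUG_ON` checks above pin `data_off`, `refcnt`, `nb_segs`, and `port` to consecutive 16-bit slots so that one 64-bit load of `rearm_data` covers all four, letting the Rx fast path rearm each mbuf with a single store. A standalone illustration of the trick (the struct layout is a hypothetical stand-in mirroring those offset checks, not the real `rte_mbuf`):

```c
#include <stdint.h>
#include <string.h>

/* Four adjacent 16-bit fields, packed like the mbuf rearm area. */
struct fake_mbuf_rearm {
	uint16_t data_off;
	uint16_t refcnt;
	uint16_t nb_segs;
	uint16_t port;
};

/* Preset the fields once and return them as one 64-bit template. */
static uint64_t mbuf_rearm_template(uint16_t data_off, uint16_t port)
{
	struct fake_mbuf_rearm r = {
		.data_off = data_off,
		.refcnt = 1,
		.nb_segs = 1,
		.port = port,
	};
	uint64_t tmp;

	memcpy(&tmp, &r, sizeof(tmp));	/* one 64-bit value covers all four */
	return tmp;
}
```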
-
-static void
-otx2_nix_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- struct otx2_eth_rxq *rxq = dev->data->rx_queues[qid];
-
- if (!rxq)
- return;
-
- otx2_nix_dbg("Releasing rxq %u", rxq->rq);
- nix_cq_rq_uninit(rxq->eth_dev, rxq);
- rte_free(rxq);
- dev->data->rx_queues[qid] = NULL;
-}
-
-static int
-otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
- uint16_t nb_desc, unsigned int socket,
- const struct rte_eth_rxconf *rx_conf,
- struct rte_mempool *mp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mempool_ops *ops;
- struct otx2_eth_rxq *rxq;
- const char *platform_ops;
- enum nix_q_size_e qsize;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile-time check that all fast path elements fit in a cache line */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
-
- /* Sanity checks */
- if (rx_conf->rx_deferred_start == 1) {
- otx2_err("Deferred Rx start is not supported");
- goto fail;
- }
-
- platform_ops = rte_mbuf_platform_mempool_ops();
- /* This driver needs octeontx2_npa mempool ops to work */
- ops = rte_mempool_get_ops(mp->ops_index);
- if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
- otx2_err("mempool ops should be of octeontx2_npa type");
- goto fail;
- }
-
- if (mp->pool_id == 0) {
- otx2_err("Invalid pool_id");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed */
- if (eth_dev->data->rx_queues[rq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
- otx2_nix_rx_queue_release(eth_dev, rq);
- rte_eth_dma_zone_free(eth_dev, "cq", rq);
- }
-
- offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
- dev->rx_offloads |= offloads;
-
- /* Find the CQ queue size */
- qsize = nix_qsize_clampup_get(dev, nb_desc);
- /* Allocate rxq memory */
- rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
- if (rxq == NULL) {
- otx2_err("Failed to allocate rq=%d", rq);
- rc = -ENOMEM;
- goto fail;
- }
-
- rxq->eth_dev = eth_dev;
- rxq->rq = rq;
- rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
- rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
- rxq->wdata = (uint64_t)rq << 32;
- rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
- eth_dev->data->port_id);
- rxq->offloads = offloads;
- rxq->pool = mp;
- rxq->qlen = nix_qsize_to_val(qsize);
- rxq->qsize = qsize;
- rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
- rxq->tstamp = &dev->tstamp;
-
- eth_dev->data->rx_queues[rq] = rxq;
-
- /* Alloc completion queue */
- rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
- if (rc) {
- otx2_err("Failed to allocate rxq=%u", rq);
- goto free_rxq;
- }
-
- rxq->qconf.socket_id = socket;
- rxq->qconf.nb_desc = nb_desc;
- rxq->qconf.mempool = mp;
- memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
-
- nix_rx_queue_reset(rxq);
- otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
- rq, mp->name, qsize, nb_desc, rxq->qlen);
-
- eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
-
- /* Calculate the delta and frequency multiplier between the PTP HI
- * clock and the TSC. These are needed to derive the raw clock value
- * from the TSC counter; the read_clock eth op returns the raw clock.
- */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc) {
- otx2_err("Failed to calculate delta and freq mult");
- goto fail;
- }
- }
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- return 0;
-
-free_rxq:
- otx2_nix_rx_queue_release(eth_dev, rq);
-fail:
- return rc;
-}
-
-static inline uint8_t
-nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
-{
- /*
- * A maximum of three segments is supported with W8; choose
- * NIX_MAXSQESZ_W16 for multi-segment offload.
- */
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- return NIX_MAXSQESZ_W16;
- else
- return NIX_MAXSQESZ_W8;
-}
-
-static uint16_t
-nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- uint16_t flags = 0;
-
- if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
- (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
- flags |= NIX_RX_OFFLOAD_RSS_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- flags |= NIX_RX_MULTI_SEG_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
- flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- flags |= NIX_RX_OFFLOAD_SECURITY_F;
-
- if (!dev->ptype_disable)
- flags |= NIX_RX_OFFLOAD_PTYPE_F;
-
- return flags;
-}
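`nix_rx_offload_flags()` above is a pure translation from ethdev offload bits to the PMD's compact fast-path flag word. A reduced sketch of that mapping (all constant names here are hypothetical, not the real `RTE_ETH_RX_OFFLOAD_*`/`NIX_RX_*` values):

```c
#include <stdint.h>

#define RX_OFFLOAD_CKSUM	(1ULL << 0)
#define RX_OFFLOAD_SCATTER	(1ULL << 1)
#define RX_OFFLOAD_TSTAMP	(1ULL << 2)

#define F_CHECKSUM	(1u << 0)
#define F_MULTI_SEG	(1u << 1)
#define F_TSTAMP	(1u << 2)

/* Fold 64-bit device offload bits into a 16-bit fast-path flag word. */
static uint16_t rx_flags(uint64_t offloads)
{
	uint16_t flags = 0;

	if (offloads & RX_OFFLOAD_CKSUM)
		flags |= F_CHECKSUM;
	if (offloads & RX_OFFLOAD_SCATTER)
		flags |= F_MULTI_SEG;
	if (offloads & RX_OFFLOAD_TSTAMP)
		flags |= F_TSTAMP;
	return flags;
}
```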
-
-static uint16_t
-nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t conf = dev->tx_offloads;
- uint16_t flags = 0;
-
- /* Fastpath is dependent on these enums */
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
- RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_mbuf, buf_iova) + 8);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_mbuf, buf_iova) + 16);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_mbuf, ol_flags) + 12);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
- offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
-
- if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
- conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
- flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
- flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
- flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
-
- if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
- flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- flags |= NIX_TX_MULTI_SEG_F;
-
- /* Enable Inner checksum for TSO */
- if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- /* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
- flags |= NIX_TX_OFFLOAD_SECURITY_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- return flags;
-}
-
-static int
-nix_sqb_lock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
-
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to lock POOL in NDC");
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_sqb_unlock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to UNLOCK POOL context");
- return -ENOMEM;
- }
- }
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to UNLOCK AURA in NDC");
- return rc;
- }
-
- return 0;
-}
-
-void
-otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
-{
- struct rte_pktmbuf_pool_private *mbp_priv;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_dev *dev;
- uint32_t buffsz;
-
- eth_dev = rxq->eth_dev;
- dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Get rx buffer size */
- mbp_priv = rte_mempool_get_priv(rxq->pool);
- buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
-
- /* Update the rx/tx_offload_flags to reflect the change
- * in rx/tx_offloads.
- */
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- }
-}
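The decision in `otx2_nix_enable_mseg_on_jumbo()` above is a single comparison: if MTU plus L2 overhead no longer fits in one mbuf data buffer, Rx scatter and Tx multi-seg must be turned on. A minimal sketch under assumed constants (`L2_OVERHEAD` here is a hypothetical stand-in for `NIX_L2_OVERHEAD`):

```c
#include <stdbool.h>
#include <stdint.h>

#define L2_OVERHEAD 26	/* assumed Ethernet + FCS + VLAN overhead */

/* True when one mbuf buffer cannot hold a full frame at this MTU. */
static bool need_scatter(uint32_t mtu, uint32_t data_room, uint32_t headroom)
{
	uint32_t buffsz = data_room - headroom;

	return mtu + L2_OVERHEAD > buffsz;
}
```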
-
-static int
-nix_sq_init(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *sq;
- uint32_t rr_quantum;
- uint16_t smq;
- int rc;
-
- if (txq->sqb_pool->pool_id == 0)
- return -EINVAL;
-
- rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
- if (rc) {
- otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
- return rc;
- }
-
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_INIT;
- sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
-
- sq->sq.smq = smq;
- sq->sq.smq_rr_quantum = rr_quantum;
- sq->sq.default_chan = dev->tx_chan_base;
- sq->sq.sqe_stype = NIX_STYPE_STF;
- sq->sq.ena = 1;
- if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
- sq->sq.sqe_stype = NIX_STYPE_STP;
- sq->sq.sqb_aura =
- npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
- sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
-
- /* Many to one reduction */
- sq->sq.qint_idx = txq->sq % dev->qints;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- if (dev->lock_tx_ctx) {
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static int
-nix_sq_uninit(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct ndc_sync_op *ndc_req;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint16_t sqes_per_sqb;
- void *sqb_buf;
- int rc, count;
-
- otx2_nix_dbg("Cleaning up sq %u", txq->sq);
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Check if sq is already cleaned up */
- if (!rsp->sq.ena)
- return 0;
-
- /* Disable sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->sq_mask.ena = ~aq->sq_mask.ena;
- aq->sq.ena = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (dev->lock_tx_ctx) {
- /* Unlock sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- nix_sqb_unlock(txq->sqb_pool);
- }
-
- /* Read SQ and free sqb's */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (aq->sq.smq_pend)
- otx2_err("SQ has pending sqe's");
-
- count = aq->sq.sqb_count;
- sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
- /* Free SQB's that are used */
- sqb_buf = (void *)rsp->sq.head_sqb;
- while (count) {
- void *next_sqb;
-
- next_sqb = *(void **)((uintptr_t)sqb_buf + (uint32_t)
- ((sqes_per_sqb - 1) *
- nix_sq_max_sqe_sz(txq)));
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- (uint64_t)sqb_buf);
- sqb_buf = next_sqb;
- count--;
- }
-
- /* Free next to use sqb */
- if (rsp->sq.next_sqb)
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- rsp->sq.next_sqb);
-
- /* Sync NDC-NIX-TX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
-
- return rc;
-}
-
-static int
-nix_sqb_aura_limit_cfg(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
-{
- struct otx2_eth_dev *dev = txq->dev;
- uint16_t sqes_per_sqb, nb_sqb_bufs;
- char name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool_objsz sz;
- struct npa_aura_s *aura;
- uint32_t tmp, blk_sz;
-
- aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
- snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
- blk_sz = dev->sqb_size;
-
- if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
- sqes_per_sqb = (dev->sqb_size / 8) / 16;
- else
- sqes_per_sqb = (dev->sqb_size / 8) / 8;
-
- nb_sqb_bufs = nb_desc / sqes_per_sqb;
- /* Clamp up to devarg passed SQB count */
- nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_DEF_SQB,
- nb_sqb_bufs + NIX_SQB_LIST_SPACE));
-
- txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
- 0, 0, dev->node,
- RTE_MEMPOOL_F_NO_SPREAD);
- txq->nb_sqb_bufs = nb_sqb_bufs;
- txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
- txq->nb_sqb_bufs_adj = nb_sqb_bufs -
- RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
- txq->nb_sqb_bufs_adj =
- (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
-
- if (txq->sqb_pool == NULL) {
- otx2_err("Failed to allocate sqe mempool");
- goto fail;
- }
-
- memset(aura, 0, sizeof(*aura));
- aura->fc_ena = 1;
- aura->fc_addr = txq->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
- if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
- otx2_err("Failed to set ops for sqe mempool");
- goto fail;
- }
- if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
- otx2_err("Failed to populate sqe mempool");
- goto fail;
- }
-
- tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
- if (dev->sqb_size != sz.elt_size) {
- otx2_err("sqe pool block size is not expected %d != %d",
- dev->sqb_size, tmp);
- goto fail;
- }
-
- nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
- if (dev->lock_tx_ctx)
- nix_sqb_lock(txq->sqb_pool);
-
- return 0;
-fail:
- return -ENOMEM;
-}
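The SQB sizing in `nix_alloc_sqb_pool()` above reduces to simple arithmetic: an SQB of `sqb_size` bytes holds 8-byte words, each SQE occupies 16 words (W16) or 8 words (W8), and the buffer count is the descriptor count divided by SQEs per buffer. A sketch of just that arithmetic (helper names are hypothetical):

```c
#include <stdint.h>

/* SQEs that fit in one send queue buffer of sqb_size bytes. */
static uint16_t sqes_per_sqb(uint32_t sqb_size, int w16)
{
	return (sqb_size / 8) / (w16 ? 16 : 8);
}

/* SQBs needed to back nb_desc descriptors (before clamping). */
static uint16_t nb_sqb_bufs_for(uint16_t nb_desc, uint16_t per_sqb)
{
	return nb_desc / per_sqb;
}
```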
-
-void
-otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- struct nix_send_mem_s *send_mem;
- union nix_send_sg_s *sg;
-
- /* Initialize the fields based on basic single segment packet */
- memset(&txq->cmd, 0, sizeof(txq->cmd));
-
- if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
- send_hdr->w0.sizem1 = 2;
-
- send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
- send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
- if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- /* Default: one seg packet would have:
- * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
- * => 8/2 - 1 = 3
- */
- send_hdr->w0.sizem1 = 3;
- send_hdr_ext->w0.tstmp = 1;
-
- /* To calculate the offset for send_mem,
- * send_hdr->w0.sizem1 * 2
- */
- send_mem = (struct nix_send_mem_s *)(txq->cmd +
- (send_hdr->w0.sizem1 << 1));
- send_mem->subdc = NIX_SUBDC_MEM;
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
- send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
- }
- sg = (union nix_send_sg_s *)&txq->cmd[4];
- } else {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
- send_hdr->w0.sizem1 = 1;
- sg = (union nix_send_sg_s *)&txq->cmd[2];
- }
-
- send_hdr->w0.sq = txq->sq;
- sg->subdc = NIX_SUBDC_SG;
- sg->segs = 1;
- sg->ld_type = NIX_SENDLDTYPE_LDD;
-
- rte_smp_wmb();
-}
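The `sizem1` values in `otx2_nix_form_default_desc()` above come from one formula: the descriptor command is built from 8-byte words (2 for HDR, 2 for EXT, 1 for SG, 1 for IOVA, 2 for MEM), and the size field counts 16-byte units minus one. A one-function sketch of that arithmetic:

```c
/* sizem1 for a command of words8 8-byte words: 16-byte units minus one. */
static int sizem1(int words8)
{
	return words8 / 2 - 1;
}
```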
-
-static void
-otx2_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
-{
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[qid];
-
- if (!txq)
- return;
-
- otx2_nix_dbg("Releasing txq %u", txq->sq);
-
- /* Flush and disable tm */
- otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
-
- /* Free sqb's and disable sq */
- nix_sq_uninit(txq);
-
- if (txq->sqb_pool) {
- rte_mempool_free(txq->sqb_pool);
- txq->sqb_pool = NULL;
- }
- otx2_nix_sq_flush_post(txq);
- rte_free(txq);
- eth_dev->data->tx_queues[qid] = NULL;
-}
-
-
-static int
-otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
- uint16_t nb_desc, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct rte_memzone *fc;
- struct otx2_eth_txq *txq;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile-time check that all fast path elements fit in a cache line */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
-
- if (tx_conf->tx_deferred_start) {
- otx2_err("Tx deferred start is not supported");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed. */
- if (eth_dev->data->tx_queues[sq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
- otx2_nix_tx_queue_release(eth_dev, sq);
- }
-
- /* Find the expected offloads for this queue */
- offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
-
- /* Allocating tx queue data structure */
- txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
- OTX2_ALIGN, socket_id);
- if (txq == NULL) {
- otx2_err("Failed to alloc txq=%d", sq);
- rc = -ENOMEM;
- goto fail;
- }
- txq->sq = sq;
- txq->dev = dev;
- txq->sqb_pool = NULL;
- txq->offloads = offloads;
- dev->tx_offloads |= offloads;
- eth_dev->data->tx_queues[sq] = txq;
-
- /*
- * Allocate memory for flow control updates from HW.
- * Alloc one cache line, so that fits all FC_STYPE modes.
- */
- fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
- OTX2_ALIGN + sizeof(struct npa_aura_s),
- OTX2_ALIGN, dev->node);
- if (fc == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- rc = -ENOMEM;
- goto free_txq;
- }
- txq->fc_iova = fc->iova;
- txq->fc_mem = fc->addr;
-
- /* Initialize the aura sqb pool */
- rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
- if (rc) {
- otx2_err("Failed to alloc sqe pool rc=%d", rc);
- goto free_txq;
- }
-
- /* Initialize the SQ */
- rc = nix_sq_init(txq);
- if (rc) {
- otx2_err("Failed to init sq=%d context", sq);
- goto free_txq;
- }
-
- txq->fc_cache_pkts = 0;
- txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
- /* Evenly distribute LMT slot for each sq */
- txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
-
- txq->qconf.socket_id = socket_id;
- txq->qconf.nb_desc = nb_desc;
- memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
-
- txq->lso_tun_fmt = dev->lso_tun_fmt;
- otx2_nix_form_default_desc(txq);
-
- otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
- " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
- fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
- txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
- eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
-
-free_txq:
- otx2_nix_tx_queue_release(eth_dev, sq);
-fail:
- return rc;
-}
-
-static int
-nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = NULL;
- struct otx2_eth_qconf *rx_qconf = NULL;
- struct otx2_eth_txq **txq;
- struct otx2_eth_rxq **rxq;
- int i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
- if (tx_qconf == NULL) {
- otx2_err("Failed to allocate memory for tx_qconf");
- goto fail;
- }
-
- rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
- if (rx_qconf == NULL) {
- otx2_err("Failed to allocate memory for rx_qconf");
- goto fail;
- }
-
- txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
- for (i = 0; i < nb_txq; i++) {
- if (txq[i] == NULL) {
- tx_qconf[i].valid = false;
- otx2_info("txq[%d] is already released", i);
- continue;
- }
- memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
- tx_qconf[i].valid = true;
- otx2_nix_tx_queue_release(eth_dev, i);
- }
-
- rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
- for (i = 0; i < nb_rxq; i++) {
- if (rxq[i] == NULL) {
- rx_qconf[i].valid = false;
- otx2_info("rxq[%d] is already released", i);
- continue;
- }
- memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
- rx_qconf[i].valid = true;
- otx2_nix_rx_queue_release(eth_dev, i);
- }
-
- dev->tx_qconf = tx_qconf;
- dev->rx_qconf = rx_qconf;
- return 0;
-
-fail:
- free(tx_qconf);
- free(rx_qconf);
-
- return -ENOMEM;
-}
-
-static int
-nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
- struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
- int rc, i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- rc = -ENOMEM;
- /* Setup tx & rx queues with previous configuration so
- * that the queues can be functional in cases like ports
- * are started without re configuring queues.
- *
- * Usual re config sequence is like below:
- * port_configure() {
- * if(reconfigure) {
- * queue_release()
- * queue_setup()
- * }
- * queue_configure() {
- * queue_release()
- * queue_setup()
- * }
- * }
- * port_start()
- *
- * In some application's control path, queue_configure() would
- * NOT be invoked for TXQs/RXQs in port_configure().
- * In such cases, queues can be functional after start as the
- * queues are already setup in port_configure().
- */
- for (i = 0; i < nb_txq; i++) {
- if (!tx_qconf[i].valid)
- continue;
- rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
- tx_qconf[i].socket_id,
- &tx_qconf[i].conf.tx);
- if (rc) {
- otx2_err("Failed to setup tx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_tx_queue_release(eth_dev, i);
- goto fail;
- }
- }
-
- free(tx_qconf); tx_qconf = NULL;
-
- for (i = 0; i < nb_rxq; i++) {
- if (!rx_qconf[i].valid)
- continue;
- rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
- rx_qconf[i].socket_id,
- &rx_qconf[i].conf.rx,
- rx_qconf[i].mempool);
- if (rc) {
- otx2_err("Failed to setup rx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_rx_queue_release(eth_dev, i);
- goto release_tx_queues;
- }
- }
-
- free(rx_qconf); rx_qconf = NULL;
-
- return 0;
-
-release_tx_queues:
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
-fail:
- if (tx_qconf)
- free(tx_qconf);
- if (rx_qconf)
- free(rx_qconf);
-
- return rc;
-}
-
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
-static void
-nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
-{
- /* These dummy functions are required for supporting
- * some applications which reconfigure queues without
- * stopping tx burst and rx burst threads(eg kni app)
- * When the queues context is saved, txq/rxqs are released
- * which caused app crash since rx/tx burst is still
- * on different lcores
- */
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
- rte_mb();
-}
-
-static void
-nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4)
-{
- volatile struct nix_lso_format *field;
-
- /* Format works only with TCP packet marked by OL3/OL4 */
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
- /* TCP flags field */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Outer UDP length */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static int
-nix_setup_lso_formats(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lso_format_cfg_rsp *rsp;
- struct nix_lso_format_cfg *req;
- uint8_t *fmt;
- int rc;
-
- /* Skip if TSO was not requested */
- if (!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))
- return 0;
- /*
- * IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4)
- return -EFAULT;
- otx2_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
-
-
- /*
- * IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6)
- return -EFAULT;
- otx2_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /* Save all tun formats into u64 for fast path.
- * Lower 32bit has non-udp tunnel formats.
- * Upper 32bit has udp tunnel formats.
- */
- fmt = dev->lso_tun_idx;
- dev->lso_tun_fmt = ((uint64_t)fmt[NIX_LSO_TUN_V4V4] |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 8 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 16 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 24);
-
- fmt = dev->lso_udp_tun_idx;
- dev->lso_tun_fmt |= ((uint64_t)fmt[NIX_LSO_TUN_V4V4] << 32 |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 40 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 48 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 56);
-
- return 0;
-}
-
-static int
-otx2_nix_configure(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- struct rte_eth_txmode *txmode = &conf->txmode;
- char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
- struct rte_ether_addr *ea;
- uint8_t nb_rxq, nb_txq;
- int rc;
-
- rc = -EINVAL;
-
- /* Sanity checks */
- if (rte_eal_has_hugepages() == 0) {
- otx2_err("Huge page is not configured");
- goto fail_configure;
- }
-
- if (conf->dcb_capability_en == 1) {
- otx2_err("dcb enable is not supported");
- goto fail_configure;
- }
-
- if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
- otx2_err("Flow director is not supported");
- goto fail_configure;
- }
-
- if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
- rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
- otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
- goto fail_configure;
- }
-
- if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
- otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
- goto fail_configure;
- }
-
- if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
- otx2_err("Outer IP and SCTP checksum unsupported");
- goto fail_configure;
- }
-
- /* Free the resources allocated from the previous configure */
- if (dev->configured == 1) {
- otx2_eth_sec_fini(eth_dev);
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
- otx2_nix_vlan_fini(eth_dev);
- otx2_nix_mc_addr_list_uninstall(eth_dev);
- otx2_flow_free_all_resources(dev);
- oxt2_nix_unregister_queue_irqs(eth_dev);
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
- nix_set_nop_rxtx_function(eth_dev);
- rc = nix_store_queue_cfg_and_then_release(eth_dev);
- if (rc)
- goto fail_configure;
- otx2_nix_tm_fini(eth_dev);
- nix_lf_free(dev);
- }
-
- dev->rx_offloads = rxmode->offloads;
- dev->tx_offloads = txmode->offloads;
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- dev->rss_info.rss_grps = NIX_RSS_GRPS;
-
- nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
- nb_txq = RTE_MAX(data->nb_tx_queues, 1);
-
- /* Alloc a nix lf */
- rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
- if (rc) {
- otx2_err("Failed to init nix_lf rc=%d", rc);
- goto fail_offloads;
- }
-
- otx2_nix_err_intr_enb_dis(eth_dev, true);
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- if (dev->ptp_en &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- goto free_nix_lf;
- }
-
- rc = nix_lf_switch_header_type_enable(dev, true);
- if (rc) {
- otx2_err("Failed to enable switch type nix_lf rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = nix_setup_lso_formats(dev);
- if (rc) {
- otx2_err("failed to setup nix lso format fields, rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Configure RSS */
- rc = otx2_nix_rss_config(eth_dev);
- if (rc) {
- otx2_err("Failed to configure rss rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Init the default TM scheduler hierarchy */
- rc = otx2_nix_tm_init_default(eth_dev);
- if (rc) {
- otx2_err("Failed to init traffic manager rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = otx2_nix_vlan_offload_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init vlan offload rc=%d", rc);
- goto tm_fini;
- }
-
- /* Register queue IRQs */
- rc = oxt2_nix_register_queue_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register queue interrupts rc=%d", rc);
- goto vlan_fini;
- }
-
- /* Register cq IRQs */
- if (eth_dev->data->dev_conf.intr_conf.rxq) {
- if (eth_dev->data->nb_rx_queues > dev->cints) {
- otx2_err("Rx interrupt cannot be enabled, rxq > %d",
- dev->cints);
- goto q_irq_fini;
- }
- /* Rx interrupt feature cannot work with vector mode because,
- * vector mode doesn't process packets unless min 4 pkts are
- * received, while cq interrupts are generated even for 1 pkt
- * in the CQ.
- */
- dev->scalar_ena = true;
-
- rc = oxt2_nix_register_cq_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register CQ interrupts rc=%d", rc);
- goto q_irq_fini;
- }
- }
-
- /* Configure loop back mode */
- rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
- if (rc) {
- otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
- if (rc) {
- otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
- goto cq_fini;
- }
-
- /* Enable security */
- rc = otx2_eth_sec_init(eth_dev);
- if (rc)
- goto cq_fini;
-
- rc = otx2_nix_flow_ctrl_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init flow ctrl mode %d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_mc_addr_list_install(eth_dev);
- if (rc < 0) {
- otx2_err("Failed to install mc address list rc=%d", rc);
- goto sec_fini;
- }
-
- /*
- * Restore queue config when reconfigure followed by
- * reconfigure and no queue configure invoked from application case.
- */
- if (dev->configured == 1) {
- rc = nix_restore_queue_cfg(eth_dev);
- if (rc)
- goto uninstall_mc_list;
- }
-
- /* Update the mac address */
- ea = eth_dev->data->mac_addrs;
- memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
- if (rte_is_zero_ether_addr(ea))
- rte_eth_random_addr((uint8_t *)ea);
-
- rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
-
- /* Apply new link configurations if changed */
- rc = otx2_apply_link_speed(eth_dev);
- if (rc) {
- otx2_err("Failed to set link configuration");
- goto uninstall_mc_list;
- }
-
- otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
- " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
- " rx_flags=0x%x tx_flags=0x%x",
- eth_dev->data->port_id, ea_fmt, nb_rxq,
- nb_txq, dev->rx_offloads, dev->tx_offloads,
- dev->rx_offload_flags, dev->tx_offload_flags);
-
- /* All good */
- dev->configured = 1;
- dev->configured_nb_rx_qs = data->nb_rx_queues;
- dev->configured_nb_tx_qs = data->nb_tx_queues;
- return 0;
-
-uninstall_mc_list:
- otx2_nix_mc_addr_list_uninstall(eth_dev);
-sec_fini:
- otx2_eth_sec_fini(eth_dev);
-cq_fini:
- oxt2_nix_unregister_cq_irqs(eth_dev);
-q_irq_fini:
- oxt2_nix_unregister_queue_irqs(eth_dev);
-vlan_fini:
- otx2_nix_vlan_fini(eth_dev);
-tm_fini:
- otx2_nix_tm_fini(eth_dev);
-free_nix_lf:
- nix_lf_free(dev);
-fail_offloads:
- dev->rx_offload_flags &= ~nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags &= ~nix_tx_offload_flags(eth_dev);
-fail_configure:
- dev->configured = 0;
- return rc;
-}
-
-int
-otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc = -EINVAL;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-int
-otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- txq->fc_cache_pkts = 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
- if (rc) {
- otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
- if (rc) {
- otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mbuf *rx_pkts[32];
- struct otx2_eth_rxq *rxq;
- struct rte_eth_link link;
- int count, i, j, rc;
-
- nix_lf_switch_header_type_enable(dev, false);
- nix_cgx_stop_link_event(dev);
- npc_rx_disable(dev);
-
- /* Stop rx queues and free up pkts pending */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_stop(eth_dev, i);
- if (rc)
- continue;
-
- rxq = eth_dev->data->rx_queues[i];
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- while (count) {
- for (j = 0; j < count; j++)
- rte_pktmbuf_free(rx_pkts[j]);
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- }
- }
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- /* Bring down link status internally */
- memset(&link, 0, sizeof(link));
- rte_eth_linkstatus_set(eth_dev, &link);
-
- return 0;
-}
-
-static int
-otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- /* MTU recalculate should be avoided here if PTP is enabled by PF, as
- * otx2_nix_recalc_mtu would be invoked during otx2_nix_ptp_enable_vf
- * call below.
- */
- if (eth_dev->data->nb_rx_queues != 0 && !otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- return rc;
- }
-
- /* Start rx queues */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = otx2_nix_tx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
- if (rc) {
- otx2_err("Failed to update flow ctrl mode %d", rc);
- return rc;
- }
-
- /* Enable PTP if it was requested by the app or if it is already
- * enabled in PF owning this VF
- */
- memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_enable(eth_dev);
- else
- otx2_nix_timesync_disable(eth_dev);
-
- /* Update VF about data off shifted by 8 bytes if PTP already
- * enabled in PF owning this VF
- */
- if (otx2_ethdev_is_ptp_en(dev) && otx2_dev_is_vf(dev))
- otx2_nix_ptp_enable_vf(eth_dev);
-
- if (dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F) {
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
- }
-
- rc = npc_rx_enable(dev);
- if (rc) {
- otx2_err("Failed to enable NPC rx %d", rc);
- return rc;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, true);
-
- rc = nix_cgx_start_link_event(dev);
- if (rc) {
- otx2_err("Failed to start cgx link event %d", rc);
- goto rx_disable;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, false);
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-
-rx_disable:
- npc_rx_disable(dev);
- otx2_nix_toggle_flag_link_cfg(dev, false);
- return rc;
-}
-
-static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
-static int otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
-
-/* Initialize and register driver with DPDK Application */
-static const struct eth_dev_ops otx2_eth_dev_ops = {
- .dev_infos_get = otx2_nix_info_get,
- .dev_configure = otx2_nix_configure,
- .link_update = otx2_nix_link_update,
- .tx_queue_setup = otx2_nix_tx_queue_setup,
- .tx_queue_release = otx2_nix_tx_queue_release,
- .tm_ops_get = otx2_nix_tm_ops_get,
- .rx_queue_setup = otx2_nix_rx_queue_setup,
- .rx_queue_release = otx2_nix_rx_queue_release,
- .dev_start = otx2_nix_dev_start,
- .dev_stop = otx2_nix_dev_stop,
- .dev_close = otx2_nix_dev_close,
- .tx_queue_start = otx2_nix_tx_queue_start,
- .tx_queue_stop = otx2_nix_tx_queue_stop,
- .rx_queue_start = otx2_nix_rx_queue_start,
- .rx_queue_stop = otx2_nix_rx_queue_stop,
- .dev_set_link_up = otx2_nix_dev_set_link_up,
- .dev_set_link_down = otx2_nix_dev_set_link_down,
- .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
- .dev_ptypes_set = otx2_nix_ptypes_set,
- .dev_reset = otx2_nix_dev_reset,
- .stats_get = otx2_nix_dev_stats_get,
- .stats_reset = otx2_nix_dev_stats_reset,
- .get_reg = otx2_nix_dev_get_reg,
- .mtu_set = otx2_nix_mtu_set,
- .mac_addr_add = otx2_nix_mac_addr_add,
- .mac_addr_remove = otx2_nix_mac_addr_del,
- .mac_addr_set = otx2_nix_mac_addr_set,
- .set_mc_addr_list = otx2_nix_set_mc_addr_list,
- .promiscuous_enable = otx2_nix_promisc_enable,
- .promiscuous_disable = otx2_nix_promisc_disable,
- .allmulticast_enable = otx2_nix_allmulticast_enable,
- .allmulticast_disable = otx2_nix_allmulticast_disable,
- .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
- .reta_update = otx2_nix_dev_reta_update,
- .reta_query = otx2_nix_dev_reta_query,
- .rss_hash_update = otx2_nix_rss_hash_update,
- .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
- .xstats_get = otx2_nix_xstats_get,
- .xstats_get_names = otx2_nix_xstats_get_names,
- .xstats_reset = otx2_nix_xstats_reset,
- .xstats_get_by_id = otx2_nix_xstats_get_by_id,
- .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
- .rxq_info_get = otx2_nix_rxq_info_get,
- .txq_info_get = otx2_nix_txq_info_get,
- .rx_burst_mode_get = otx2_rx_burst_mode_get,
- .tx_burst_mode_get = otx2_tx_burst_mode_get,
- .tx_done_cleanup = otx2_nix_tx_done_cleanup,
- .set_queue_rate_limit = otx2_nix_tm_set_queue_rate_limit,
- .pool_ops_supported = otx2_nix_pool_ops_supported,
- .flow_ops_get = otx2_nix_dev_flow_ops_get,
- .get_module_info = otx2_nix_get_module_info,
- .get_module_eeprom = otx2_nix_get_module_eeprom,
- .fw_version_get = otx2_nix_fw_version_get,
- .flow_ctrl_get = otx2_nix_flow_ctrl_get,
- .flow_ctrl_set = otx2_nix_flow_ctrl_set,
- .timesync_enable = otx2_nix_timesync_enable,
- .timesync_disable = otx2_nix_timesync_disable,
- .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
- .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
- .timesync_adjust_time = otx2_nix_timesync_adjust_time,
- .timesync_read_time = otx2_nix_timesync_read_time,
- .timesync_write_time = otx2_nix_timesync_write_time,
- .vlan_offload_set = otx2_nix_vlan_offload_set,
- .vlan_filter_set = otx2_nix_vlan_filter_set,
- .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
- .vlan_tpid_set = otx2_nix_vlan_tpid_set,
- .vlan_pvid_set = otx2_nix_vlan_pvid_set,
- .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
- .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
- .read_clock = otx2_nix_read_clock,
-};
-
-static inline int
-nix_lf_attach(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct rsrc_attach_req *req;
-
- /* Attach NIX(lf) */
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->modify = true;
- req->nixlf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- dev->nix_msixoff = msix_rsp->nix_msixoff;
-
- return rc;
-}
-
-static inline int
-otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
-
- /* Detach all except npa lf */
- req->partial = true;
- req->nixlf = true;
- req->sso = true;
- req->ssow = true;
- req->timlfs = true;
- req->cptlfs = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static bool
-otx2_eth_dev_is_sdp(struct rte_pci_device *pci_dev)
-{
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- return true;
- return false;
-}
-
-static inline uint64_t
-nix_get_blkaddr(struct otx2_eth_dev *dev)
-{
- uint64_t reg;
-
- /* Reading the discovery register to know which NIX is the LF
- * attached to.
- */
- reg = otx2_read64(dev->bar2 +
- RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0));
-
- return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1;
-}
-
-static int
-otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, max_entries;
-
- eth_dev->dev_ops = &otx2_eth_dev_ops;
- eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
- eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
- eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
-
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- /* Setup callbacks for secondary process */
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
- return 0;
- }
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- rte_eth_copy_pci_info(eth_dev, pci_dev);
- eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
- memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
- offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
-
- /* Parse devargs string */
- rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
- if (rc) {
- otx2_err("Failed to parse devargs rc=%d", rc);
- goto error;
- }
-
 -	if (!dev->mbox_active) {
 -		/* Initialize the base otx2_dev object
 -		 * only if not already present
 -		 */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
- }
- if (otx2_eth_dev_is_sdp(pci_dev))
- dev->sdp_link = true;
- else
- dev->sdp_link = false;
- /* Device generic callbacks */
- dev->ops = &otx2_dev_ops;
- dev->eth_dev = eth_dev;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto otx2_dev_uninit;
-
- dev->configured = 0;
- dev->drv_inited = true;
- dev->ptype_disable = 0;
- dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
-
- /* Attach NIX LF */
- rc = nix_lf_attach(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- dev->base = dev->bar2 + (nix_get_blkaddr(dev) << 20);
-
- /* Get NIX MSIX offset */
- rc = nix_lf_get_msix_offset(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- /* Register LF irq handlers */
- rc = otx2_nix_register_irqs(eth_dev);
- if (rc)
- goto mbox_detach;
-
- /* Get maximum number of supported MAC entries */
- max_entries = otx2_cgx_mac_max_entries_get(dev);
- if (max_entries < 0) {
- otx2_err("Failed to get max entries for mac addr");
- rc = -ENOTSUP;
- goto unregister_irq;
- }
-
- /* For VFs, returned max_entries will be 0. But to keep default MAC
- * address, one entry must be allocated. So setting up to 1.
- */
- if (max_entries == 0)
- max_entries = 1;
-
- eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
- RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- otx2_err("Failed to allocate memory for mac addr");
- rc = -ENOMEM;
- goto unregister_irq;
- }
-
- dev->max_mac_entries = max_entries;
-
- rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
- if (rc)
- goto free_mac_addrs;
-
- /* Update the mac address */
- memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
-
- /* Also sync same MAC address to CGX table */
 -	otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
-
- /* Initialize the tm data structures */
- otx2_nix_tm_conf_init(eth_dev);
-
- dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
- dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
-
- if (otx2_dev_is_96xx_A0(dev) ||
- otx2_dev_is_95xx_Ax(dev)) {
- dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
- dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
- }
-
- /* Create security ctx */
- rc = otx2_eth_sec_ctx_create(eth_dev);
- if (rc)
- goto free_mac_addrs;
- dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
-
- /* Initialize rte-flow */
- rc = otx2_flow_init(dev);
- if (rc)
- goto sec_ctx_destroy;
-
- otx2_nix_mc_filter_init(dev);
-
- otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
- " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
- eth_dev->data->port_id, dev->pf, dev->vf,
- OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
- dev->rx_offload_capa, dev->tx_offload_capa);
- return 0;
-
-sec_ctx_destroy:
- otx2_eth_sec_ctx_destroy(eth_dev);
-free_mac_addrs:
- rte_free(eth_dev->data->mac_addrs);
-unregister_irq:
- otx2_nix_unregister_irqs(eth_dev);
-mbox_detach:
- otx2_eth_dev_lf_detach(dev->mbox);
-otx2_npa_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- otx2_err("Failed to init nix eth_dev rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, i;
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Clear the flag since we are closing down */
- dev->configured = 0;
-
- /* Disable nix bpid config */
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
-
- npc_rx_disable(dev);
-
- /* Disable vlan offloads */
- otx2_nix_vlan_fini(eth_dev);
-
- /* Disable other rte_flow entries */
- otx2_flow_fini(dev);
-
- /* Free multicast filter list */
- otx2_nix_mc_filter_fini(dev);
-
- /* Disable PTP if already enabled */
- if (otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_disable(eth_dev);
-
- nix_cgx_stop_link_event(dev);
-
- /* Unregister the dev ops, this is required to stop VFs from
- * receiving link status updates on exit path.
- */
- dev->ops = NULL;
-
- /* Free up SQs */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
- eth_dev->data->nb_tx_queues = 0;
-
- /* Free up RQ's and CQ's */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
- otx2_nix_rx_queue_release(eth_dev, i);
- eth_dev->data->nb_rx_queues = 0;
-
- /* Free tm resources */
- rc = otx2_nix_tm_fini(eth_dev);
- if (rc)
- otx2_err("Failed to cleanup tm, rc=%d", rc);
-
- /* Unregister queue irqs */
- oxt2_nix_unregister_queue_irqs(eth_dev);
-
- /* Unregister cq irqs */
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
-
- rc = nix_lf_free(dev);
- if (rc)
- otx2_err("Failed to free nix lf, rc=%d", rc);
-
- rc = otx2_npa_lf_fini();
- if (rc)
- otx2_err("Failed to cleanup npa lf, rc=%d", rc);
-
- /* Disable security */
- otx2_eth_sec_fini(eth_dev);
-
- /* Destroy security ctx */
- otx2_eth_sec_ctx_destroy(eth_dev);
-
- rte_free(eth_dev->data->mac_addrs);
- eth_dev->data->mac_addrs = NULL;
- dev->drv_inited = false;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- otx2_nix_unregister_irqs(eth_dev);
-
- rc = otx2_eth_dev_lf_detach(dev->mbox);
- if (rc)
- otx2_err("Failed to detach resources, rc=%d", rc);
-
- /* Check if mbox close is needed */
- if (!mbox_close)
- return 0;
-
- if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
- /* Will be freed later by PMD */
- eth_dev->data->dev_private = NULL;
- return 0;
- }
-
- otx2_dev_fini(pci_dev, dev);
- return 0;
-}
-
-static int
-otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
-{
- otx2_eth_dev_uninit(eth_dev, true);
- return 0;
-}
-
-static int
-otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
-{
- int rc;
-
- rc = otx2_eth_dev_uninit(eth_dev, false);
- if (rc)
- return rc;
-
- return otx2_eth_dev_init(eth_dev);
-}
-
-static int
-nix_remove(struct rte_pci_device *pci_dev)
-{
- struct rte_eth_dev *eth_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_dev *otx2_dev;
- int rc;
-
- eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
- if (eth_dev) {
- /* Cleanup eth dev */
- rc = otx2_eth_dev_uninit(eth_dev, true);
- if (rc)
- return rc;
-
- rte_eth_dev_release_port(eth_dev);
- }
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Check for common resources */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
- return 0;
-
- otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
-
- if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
- goto exit;
-
- /* Safe to cleanup mbox as no more users */
- otx2_dev_fini(pci_dev, otx2_dev);
- rte_free(otx2_dev);
- return 0;
-
-exit:
- otx2_info("%s: common resource in use by other devices", pci_dev->name);
- return -EAGAIN;
-}
-
-static int
-nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- int rc;
-
- RTE_SET_USED(pci_drv);
-
- rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
- otx2_eth_dev_init);
-
- /* On error on secondary, recheck if port exists in primary or
- * in mid of detach state.
- */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
- if (!rte_eth_dev_allocated(pci_dev->device.name))
- return 0;
- return rc;
-}
-
-static const struct rte_pci_id pci_nix_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_nix = {
- .id_table = pci_nix_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
- RTE_PCI_DRV_INTR_LSC,
- .probe = nix_probe,
- .remove = nix_remove,
-};
-
-RTE_PMD_REGISTER_PCI(OCTEONTX2_PMD, pci_nix);
-RTE_PMD_REGISTER_PCI_TABLE(OCTEONTX2_PMD, pci_nix_map);
-RTE_PMD_REGISTER_KMOD_DEP(OCTEONTX2_PMD, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
deleted file mode 100644
index a5282c6c12..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ /dev/null
@@ -1,619 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_H__
-#define __OTX2_ETHDEV_H__
-
-#include <math.h>
-#include <stdint.h>
-
-#include <rte_common.h>
-#include <rte_ethdev.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf.h>
-#include <rte_mempool.h>
-#include <rte_security_driver.h>
-#include <rte_spinlock.h>
-#include <rte_string_fns.h>
-#include <rte_time.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_flow.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-#include "otx2_rx.h"
-#include "otx2_tm.h"
-#include "otx2_tx.h"
-
-#define OTX2_ETH_DEV_PMD_VERSION "1.0"
-
-/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
-
-/* Minimum CQ size should be 4K */
-#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
-#define otx2_ethdev_fixup_is_min_4k_q(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
-/* Limit CQ being full */
-#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
-#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
-
-/* Used for struct otx2_eth_dev::flags */
-#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
-
-/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
- * In Tx space is always reserved for this in FRS.
- */
-#define NIX_MAX_VTAG_INS 2
-#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
-
-/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
-#define NIX_L2_OVERHEAD \
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
-#define NIX_L2_MAX_LEN \
- (RTE_ETHER_MTU + NIX_L2_OVERHEAD)
-
-/* HW config of frame size doesn't include FCS */
-#define NIX_MAX_HW_FRS 9212
-#define NIX_MIN_HW_FRS 60
-
-/* Since HW FRS includes NPC VTAG insertion space, user has reduced FRS */
-#define NIX_MAX_FRS \
- (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
-
-#define NIX_MIN_FRS \
- (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
-
-#define NIX_MAX_MTU \
- (NIX_MAX_FRS - NIX_L2_OVERHEAD)
-
-#define NIX_MAX_SQB 512
-#define NIX_DEF_SQB 16
-#define NIX_MIN_SQB 8
-#define NIX_SQB_LIST_SPACE 2
-#define NIX_RSS_RETA_SIZE_MAX 256
-/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
-#define NIX_RSS_GRPS 8
-#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
-#define NIX_RSS_RETA_SIZE 64
-#define NIX_RX_MIN_DESC 16
-#define NIX_RX_MIN_DESC_ALIGN 16
-#define NIX_RX_NB_SEG_MAX 6
-#define NIX_CQ_ENTRY_SZ 128
-#define NIX_CQ_ALIGN 512
-#define NIX_SQB_LOWER_THRESH 70
-#define LMT_SLOT_MASK 0x7f
-#define NIX_RX_DEFAULT_RING_SZ 4096
-
-/* If PTP is enabled additional SEND MEM DESC is required which
- * takes 2 words, hence max 7 iova address are possible
- */
-#if defined(RTE_LIBRTE_IEEE1588)
-#define NIX_TX_NB_SEG_MAX 7
-#else
-#define NIX_TX_NB_SEG_MAX 9
-#endif
-
-#define NIX_TX_MSEG_SG_DWORDS \
- ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
- + NIX_TX_NB_SEG_MAX)
-
-/* Apply BP/DROP when CQ is 95% full */
-#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
-#define NIX_CQ_FULL_ERRATA_SKID (1024ull * 256)
-
-#define CQ_OP_STAT_OP_ERR 63
-#define CQ_OP_STAT_CQ_ERR 46
-
-#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
-#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
-
-#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
- * NIX_LF_CINTX_CNT[QCOUNT]
- * crosses this value
- */
-#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
-#define CQ_TIMER_THRESH_MAX 255
-
-#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
- | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-
-#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
- RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
- RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
- RTE_ETH_RSS_C_VLAN)
-
-#define NIX_TX_OFFLOAD_CAPA ( \
- RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
- RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
- RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
- RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_TSO | \
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
- RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
-
-#define NIX_RX_OFFLOAD_CAPA ( \
- RTE_ETH_RX_OFFLOAD_CHECKSUM | \
- RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_RX_OFFLOAD_SCATTER | \
- RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
- RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
- RTE_ETH_RX_OFFLOAD_RSS_HASH)
-
-#define NIX_DEFAULT_RSS_CTX_GROUP 0
-#define NIX_DEFAULT_RSS_MCAM_IDX -1
-
-#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
-
-#define NIX_TIMESYNC_TX_CMD_LEN 8
-/* Additional timesync values. */
-#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
-
-#define OCTEONTX2_PMD net_octeontx2
-
-#define otx2_ethdev_is_same_driver(dev) \
- (strcmp((dev)->device->driver->name, RTE_STR(OCTEONTX2_PMD)) == 0)
-
-enum nix_q_size_e {
- nix_q_size_16, /* 16 entries */
- nix_q_size_64, /* 64 entries */
- nix_q_size_256,
- nix_q_size_1K,
- nix_q_size_4K,
- nix_q_size_16K,
- nix_q_size_64K,
- nix_q_size_256K,
- nix_q_size_1M, /* Million entries */
- nix_q_size_max
-};
-
-enum nix_lso_tun_type {
- NIX_LSO_TUN_V4V4,
- NIX_LSO_TUN_V4V6,
- NIX_LSO_TUN_V6V4,
- NIX_LSO_TUN_V6V6,
- NIX_LSO_TUN_MAX,
-};
-
-struct otx2_qint {
- struct rte_eth_dev *eth_dev;
- uint8_t qintx;
-};
-
-struct otx2_rss_info {
- uint64_t nix_rss;
- uint32_t flowkey_cfg;
- uint16_t rss_size;
- uint8_t rss_grps;
- uint8_t alg_idx; /* Selected algo index */
- uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
- uint8_t key[NIX_HASH_KEY_SIZE];
-};
-
-struct otx2_eth_qconf {
- union {
- struct rte_eth_txconf tx;
- struct rte_eth_rxconf rx;
- } conf;
- void *mempool;
- uint32_t socket_id;
- uint16_t nb_desc;
- uint8_t valid;
-};
-
-struct otx2_fc_info {
- enum rte_eth_fc_mode mode; /**< Link flow control mode */
- uint8_t rx_pause;
- uint8_t tx_pause;
- uint8_t chan_cnt;
- uint16_t bpid[NIX_MAX_CHAN];
-};
-
-struct vlan_mkex_info {
- struct npc_xtract_info la_xtract;
- struct npc_xtract_info lb_xtract;
- uint64_t lb_lt_offset;
-};
-
-struct mcast_entry {
- struct rte_ether_addr mcast_mac;
- uint16_t mcam_index;
- TAILQ_ENTRY(mcast_entry) next;
-};
-
-TAILQ_HEAD(otx2_nix_mc_filter_tbl, mcast_entry);
-
-struct vlan_entry {
- uint32_t mcam_idx;
- uint16_t vlan_id;
- TAILQ_ENTRY(vlan_entry) next;
-};
-
-TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
-
-struct otx2_vlan_info {
- struct otx2_vlan_filter_tbl fltr_tbl;
- /* MKEX layer info */
- struct mcam_entry def_tx_mcam_ent;
- struct mcam_entry def_rx_mcam_ent;
- struct vlan_mkex_info mkex;
- /* Default mcam entry that matches vlan packets */
- uint32_t def_rx_mcam_idx;
- uint32_t def_tx_mcam_idx;
- /* MCAM entry that matches double vlan packets */
- uint32_t qinq_mcam_idx;
- /* Indices of tx_vtag def registers */
- uint32_t outer_vlan_idx;
- uint32_t inner_vlan_idx;
- uint16_t outer_vlan_tpid;
- uint16_t inner_vlan_tpid;
- uint16_t pvid;
- /* QinQ entry allocated before default one */
- uint8_t qinq_before_def;
- uint8_t pvid_insert_on;
- /* Rx vtag action type */
- uint8_t vtag_type_idx;
- uint8_t filter_on;
- uint8_t strip_on;
- uint8_t qinq_on;
- uint8_t promisc_on;
-};
-
-struct otx2_eth_dev {
- OTX2_DEV; /* Base class */
- RTE_MARKER otx2_eth_dev_data_start;
- uint16_t sqb_size;
- uint16_t rx_chan_base;
- uint16_t tx_chan_base;
- uint8_t rx_chan_cnt;
- uint8_t tx_chan_cnt;
- uint8_t lso_tsov4_idx;
- uint8_t lso_tsov6_idx;
- uint8_t lso_udp_tun_idx[NIX_LSO_TUN_MAX];
- uint8_t lso_tun_idx[NIX_LSO_TUN_MAX];
- uint64_t lso_tun_fmt;
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t mkex_pfl_name[MKEX_NAME_LEN];
- uint8_t max_mac_entries;
- bool dmac_filter_enable;
- uint8_t lf_tx_stats;
- uint8_t lf_rx_stats;
- uint8_t lock_rx_ctx;
- uint8_t lock_tx_ctx;
- uint16_t flags;
- uint16_t cints;
- uint16_t qints;
- uint8_t configured;
- uint8_t configured_qints;
- uint8_t configured_cints;
- uint8_t configured_nb_rx_qs;
- uint8_t configured_nb_tx_qs;
- uint8_t ptype_disable;
- uint16_t nix_msixoff;
- uintptr_t base;
- uintptr_t lmt_addr;
- uint16_t scalar_ena;
- uint16_t rss_tag_as_xor;
- uint16_t max_sqb_count;
- uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
- uint64_t rx_offloads;
- uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
- uint64_t tx_offloads;
- uint64_t rx_offload_capa;
- uint64_t tx_offload_capa;
- struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
- struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
- uint16_t txschq[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
- /* Dis-contiguous queues */
- uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Contiguous queues */
- uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t otx2_tm_root_lvl;
- uint16_t link_cfg_lvl;
- uint16_t tm_flags;
- uint16_t tm_leaf_cnt;
- uint64_t tm_rate_min;
- struct otx2_nix_tm_node_list node_list;
- struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
- struct otx2_rss_info rss_info;
- struct otx2_fc_info fc_info;
- uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- struct otx2_npc_flow_info npc_flow;
- struct otx2_vlan_info vlan_info;
- struct otx2_eth_qconf *tx_qconf;
- struct otx2_eth_qconf *rx_qconf;
- struct rte_eth_dev *eth_dev;
- eth_rx_burst_t rx_pkt_burst_no_offload;
- /* PTP counters */
- bool ptp_en;
- struct otx2_timesync_info tstamp;
- struct rte_timecounter systime_tc;
- struct rte_timecounter rx_tstamp_tc;
- struct rte_timecounter tx_tstamp_tc;
- double clk_freq_mult;
- uint64_t clk_delta;
- bool mc_tbl_set;
- struct otx2_nix_mc_filter_tbl mc_fltr_tbl;
- bool sdp_link; /* SDP flag */
- /* Inline IPsec params */
- uint16_t ipsec_in_max_spi;
- rte_spinlock_t ipsec_tbl_lock;
- uint8_t duplex;
- uint32_t speed;
-} __rte_cache_aligned;
-
-struct otx2_eth_txq {
- uint64_t cmd[8];
- int64_t fc_cache_pkts;
- uint64_t *fc_mem;
- void *lmt_addr;
- rte_iova_t io_addr;
- rte_iova_t fc_iova;
- uint16_t sqes_per_sqb_log2;
- int16_t nb_sqb_bufs_adj;
- uint64_t lso_tun_fmt;
- RTE_MARKER slow_path_start;
- uint16_t nb_sqb_bufs;
- uint16_t sq;
- uint64_t offloads;
- struct otx2_eth_dev *dev;
- struct rte_mempool *sqb_pool;
- struct otx2_eth_qconf qconf;
-} __rte_cache_aligned;
-
-struct otx2_eth_rxq {
- uint64_t mbuf_initializer;
- uint64_t data_off;
- uintptr_t desc;
- void *lookup_mem;
- uintptr_t cq_door;
- uint64_t wdata;
- int64_t *cq_status;
- uint32_t head;
- uint32_t qmask;
- uint32_t available;
- uint16_t rq;
- struct otx2_timesync_info *tstamp;
- RTE_MARKER slow_path_start;
- uint64_t aura;
- uint64_t offloads;
- uint32_t qlen;
- struct rte_mempool *pool;
- enum nix_q_size_e qsize;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_qconf qconf;
- uint16_t cq_drop;
-} __rte_cache_aligned;
-
-static inline struct otx2_eth_dev *
-otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
-{
- return eth_dev->data->dev_private;
-}
-
-/* Ops */
-int otx2_nix_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *dev_info);
-int otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev,
- const struct rte_flow_ops **ops);
-int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size);
-int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo);
-int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info);
-int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
-void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo);
-void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo);
-int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(void *rx_queue);
-int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
-
-void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
-int otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
-int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
-uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
-
-/* Multicast filter APIs */
-void otx2_nix_mc_filter_init(struct otx2_eth_dev *dev);
-void otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev);
-int otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev);
-int otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev);
-int otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr);
-
-/* MTU */
-int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
-int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
-void otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq);
-
-
-/* Link */
-void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
-int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
-void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-void otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
-int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
-int otx2_apply_link_speed(struct rte_eth_dev *eth_dev);
-
-/* IRQ */
-int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-void otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-
-/* Debug */
-int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
-int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
- struct rte_dev_reg_info *regs);
-int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
-void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
-
-/* Stats */
-int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats);
-int otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
- uint16_t queue_id, uint8_t stat_idx,
- uint8_t is_rx);
-int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats, unsigned int n);
-int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- uint64_t *values, unsigned int n);
-int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-
-/* RSS */
-void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
- uint8_t *key, uint32_t key_len);
-uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
- uint64_t ethdev_rss, uint8_t rss_level);
-int otx2_rss_set_hf(struct otx2_eth_dev *dev,
- uint32_t flowkey_cfg, uint8_t *alg_idx,
- uint8_t group, int mcam_index);
-int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
- uint16_t *ind_tbl);
-int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-/* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
-int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-
-/* Flow Control */
-int otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
-
-/* VLAN */
-int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
-void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
-int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on);
-void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
-int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid);
-int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
-
-/* Lookup configuration */
-void *otx2_nix_fastpath_lookup_mem_get(void);
-
-/* PTYPES */
-const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
-int otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask);
-
-/* Mac address handling */
-int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
-int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr,
- uint32_t index, uint32_t pool);
-void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
-int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
-
-/* Devargs */
-int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
- struct otx2_eth_dev *dev);
-
-/* Rx and Tx routines */
-void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
-void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
-void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
-
-/* Timesync - PTP routines */
-int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t flags);
-int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp);
-int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
-int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts);
-int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
- struct timespec *ts);
-int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
-int otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *time);
-int otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev);
-void otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
deleted file mode 100644
index 6d951bc7e2..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ /dev/null
@@ -1,811 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-#define NIX_REG_INFO(reg) {reg, #reg}
-#define NIX_REG_NAME_SZ 48
-
-struct nix_lf_reg_info {
- uint32_t offset;
- const char *name;
-};
-
-static const struct
-nix_lf_reg_info nix_lf_reg[] = {
- NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
- NIX_REG_INFO(NIX_LF_CFG),
- NIX_REG_INFO(NIX_LF_GINT),
- NIX_REG_INFO(NIX_LF_GINT_W1S),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT),
- NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_RAS),
- NIX_REG_INFO(NIX_LF_RAS_W1S),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
- NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
- NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
- NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
-};
-
-static int
-nix_lf_get_reg_count(struct otx2_eth_dev *dev)
-{
- int reg_count = 0;
-
- reg_count = RTE_DIM(nix_lf_reg);
- /* NIX_LF_TX_STATX */
- reg_count += dev->lf_tx_stats;
- /* NIX_LF_RX_STATX */
- reg_count += dev->lf_rx_stats;
- /* NIX_LF_QINTX_CNT*/
- reg_count += dev->qints;
- /* NIX_LF_QINTX_INT */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1S */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1C */
- reg_count += dev->qints;
- /* NIX_LF_CINTX_CNT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_WAIT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1C */
- reg_count += dev->cints;
-
- return reg_count;
-}
-
-int
-otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
-{
- uintptr_t nix_lf_base = dev->base;
- bool dump_stdout;
- uint64_t reg;
- uint32_t i;
-
- dump_stdout = data ? 0 : 1;
-
- for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
- reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
- if (dump_stdout && reg)
- nix_dump("%32s = 0x%" PRIx64,
- nix_lf_reg[i].name, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_TX_STATX */
- for (i = 0; i < dev->lf_tx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_TX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_RX_STATX */
- for (i = 0; i < dev->lf_rx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_RX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_CNT*/
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_INT */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1S */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1C */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_CNT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_WAIT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_WAIT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1C */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
- return 0;
-}
-
-int
-otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t *data = regs->data;
-
- if (data == NULL) {
- regs->length = nix_lf_get_reg_count(dev);
- regs->width = 8;
- return 0;
- }
-
- if (!regs->length ||
- regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
- otx2_nix_reg_dump(dev, data);
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline void
-nix_lf_sq_dump(__otx2_io struct nix_sq_ctx_s *ctx)
-{
- nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
- ctx->sqe_way_mask, ctx->cq);
- nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->sdp_mcast, ctx->substream);
- nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
- ctx->qint_idx, ctx->ena);
-
- nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
- ctx->sqb_count, ctx->default_chan);
- nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
- ctx->smq_rr_quantum, ctx->sso_ena);
- nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
- ctx->xoff, ctx->cq_ena, ctx->smq);
-
- nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
- ctx->sqe_stype, ctx->sq_int_ena);
- nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
- ctx->sq_int, ctx->sqb_aura);
- nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
-
- nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
- ctx->smq_next_sq_vld, ctx->smq_pend);
- nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
- ctx->smenq_next_sqb_vld, ctx->head_offset);
- nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
- ctx->smenq_offset, ctx->tail_offset);
- nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
- ctx->smq_lso_segnum, ctx->smq_next_sq);
- nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
- ctx->mnq_dis, ctx->lmt_dis);
- nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
- ctx->cq_limit, ctx->max_sqe_size);
-
- nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
- nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
- nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
- nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
- nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
-
- nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
- ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
- nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
- ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
- nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
- ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
- nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
-
- nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
- (uint64_t)ctx->scm_lso_rem);
- nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_octs);
- nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_pkts);
-}
-
-static inline void
-nix_lf_rq_dump(__otx2_io struct nix_rq_ctx_s *ctx)
-{
- nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->wqe_aura, ctx->substream);
- nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
- ctx->cq, ctx->ena_wqwd);
- nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
- ctx->ipsech_ena, ctx->sso_ena);
- nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
-
- nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
- ctx->lpb_drop_ena, ctx->spb_drop_ena);
- nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
- ctx->xqe_drop_ena, ctx->wqe_caching);
- nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
- ctx->pb_caching, ctx->sso_tt);
- nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
- ctx->sso_grp, ctx->lpb_aura);
- nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
-
- nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
- ctx->xqe_hdr_split, ctx->xqe_imm_copy);
- nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
- ctx->xqe_imm_size, ctx->later_skip);
- nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
- ctx->first_skip, ctx->lpb_sizem1);
- nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
- ctx->spb_ena, ctx->wqe_skip);
- nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
-
- nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
- ctx->spb_pool_pass, ctx->spb_pool_drop);
- nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
- ctx->spb_aura_pass, ctx->spb_aura_drop);
- nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
- ctx->wqe_pool_pass, ctx->wqe_pool_drop);
- nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
- ctx->xqe_pass, ctx->xqe_drop);
-
- nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
- ctx->qint_idx, ctx->rq_int_ena);
- nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
- ctx->rq_int, ctx->lpb_pool_pass);
- nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
- ctx->lpb_pool_drop, ctx->lpb_aura_pass);
- nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
-
- nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
- ctx->flow_tagw, ctx->bad_utag);
- nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
- ctx->good_utag, ctx->ltag);
-
- nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
- nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
- nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
-}
-
-static inline void
-nix_lf_cq_dump(__otx2_io struct nix_cq_ctx_s *ctx)
-{
- nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
-
- nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
- nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
- ctx->avg_con, ctx->cint_idx);
- nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
- ctx->cq_err, ctx->qint_idx);
- nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
- ctx->bpid, ctx->bp_ena);
-
- nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
- ctx->update_time, ctx->avg_level);
- nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
- ctx->head, ctx->tail);
-
- nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
- ctx->cq_err_int_ena, ctx->cq_err_int);
- nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
- ctx->qsize, ctx->caching);
- nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
- ctx->substream, ctx->ena);
- nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
- ctx->drop_ena, ctx->drop);
- nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
-}
-
-int
-otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, q, rq = eth_dev->data->nb_rx_queues;
- int sq = eth_dev->data->nb_tx_queues;
- struct otx2_mbox *mbox = dev->mbox;
- struct npa_aq_enq_rsp *npa_rsp;
- struct npa_aq_enq_req *npa_aq;
- struct otx2_npa_lf *npa_lf;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
-
- npa_lf = otx2_npa_lf_obj_get();
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get cq context");
- goto fail;
- }
- nix_dump("============== port=%d cq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_cq_dump(&rsp->cq);
- }
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc) {
- otx2_err("Failed to get rq context");
- goto fail;
- }
- nix_dump("============== port=%d rq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_rq_dump(&rsp->rq);
- }
- for (q = 0; q < sq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get sq context");
- goto fail;
- }
- nix_dump("============== port=%d sq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_sq_dump(&rsp->sq);
-
- if (!npa_lf) {
- otx2_err("NPA LF doesn't exist");
- continue;
- }
-
- /* Dump SQB Aura minimal info */
- npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- npa_aq->aura_id = rsp->sq.sqb_aura;
- npa_aq->ctype = NPA_AQ_CTYPE_AURA;
- npa_aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
- if (rc) {
- otx2_err("Failed to get sq's sqb_aura context");
- continue;
- }
-
- nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
- npa_rsp->aura.pool_addr);
- nix_dump("SQB Aura W1: ena\t\t\t%d",
- npa_rsp->aura.ena);
- nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.count);
- nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.limit);
- nix_dump("SQB Aura W3: fc_ena\t\t%d",
- npa_rsp->aura.fc_ena);
- nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
- npa_rsp->aura.fc_addr);
- }
-
-fail:
- return rc;
-}
-
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
- nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
- cq->tag, cq->q, cq->node, cq->cqe_type);
-
- nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
- rx->chan, rx->desc_sizem1);
- nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
- rx->imm_copy, rx->express);
- nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
- rx->wqwd, rx->errlev, rx->errcode);
- nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
- rx->latype, rx->lbtype, rx->lctype);
- nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
- rx->ldtype, rx->letype, rx->lftype);
- nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
- rx->lgtype, rx->lhtype);
-
- nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
- nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
- rx->l2m, rx->l2b, rx->l3m, rx->l3b);
- nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
- rx->vtag0_valid, rx->vtag0_gone);
- nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
- rx->vtag1_valid, rx->vtag1_gone);
- nix_dump("W1: pkind \t%d", rx->pkind);
- nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
- rx->vtag0_tci, rx->vtag1_tci);
-
- nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
- rx->laflags, rx->lbflags, rx->lcflags);
- nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
- rx->ldflags, rx->leflags, rx->lfflags);
- nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
- rx->lgflags, rx->lhflags);
-
- nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
- rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
- nix_dump("W3: match_id \t%d", rx->match_id);
-
- nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
- rx->laptr, rx->lbptr, rx->lcptr);
- nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
- rx->ldptr, rx->leptr, rx->lfptr);
- nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
- nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
- rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
-static uint8_t
-prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
- uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
-{
- uint8_t k = 0;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_SMQX_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_SMQ[%u]_CFG", schq);
-
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_MDQX_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PIR", schq);
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_CIR", schq);
-
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
-
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL4X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL3X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL2X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL1:
-
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL1X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SW_XOFF", schq);
-
- reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
- break;
- default:
- break;
- }
-
- if (k > MAX_REGS_PER_MBOX_MSG) {
- nix_dump("\t!!!NIX TM Registers request overflow!!!");
- return 0;
- }
- return k;
-}
-
-/* Dump TM hierarchy and registers */
-void
-otx2_nix_tm_dump(struct otx2_eth_dev *dev)
-{
- char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
- struct otx2_nix_tm_node *tm_node, *root_node, *parent;
- uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
- struct nix_txschq_config *req;
- const char *lvlstr, *parent_lvlstr;
- struct nix_txschq_config *rsp;
- uint32_t schq, parent_schq;
- int hw_lvl, j, k, rc;
-
- nix_dump("===TM hierarchy and registers dump of %s===",
- dev->eth_dev->data->name);
-
- root_node = NULL;
-
- for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != hw_lvl)
- continue;
-
- parent = tm_node->parent;
- if (hw_lvl == NIX_TXSCH_LVL_CNT) {
- lvlstr = "SQ";
- schq = tm_node->id;
- } else {
- lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
- schq = tm_node->hw_id;
- }
-
- if (parent) {
- parent_schq = parent->hw_id;
- parent_lvlstr =
- nix_hwlvl2str(parent->hw_lvl);
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- parent_schq = otx2_nix_get_link(dev);
- parent_lvlstr = "LINK";
- } else {
- parent_schq = tm_node->parent_hw_id;
- parent_lvlstr =
- nix_hwlvl2str(tm_node->hw_lvl + 1);
- }
-
- nix_dump("%s_%d->%s_%d", lvlstr, schq,
- parent_lvlstr, parent_schq);
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- /* Need to dump TL1 when root is TL2 */
- if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
- root_node = tm_node;
-
- /* Dump registers only when HWRES is present */
- k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
- otx2_nix_get_link(dev), reg,
- regstr);
- if (!k)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = tm_node->hw_lvl;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
- nix_dump("\n");
- }
-
- /* Dump TL1 node data when root level is TL2 */
- if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
- k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
- root_node->parent_hw_id,
- otx2_nix_get_link(dev),
- reg, regstr);
- if (!k)
- return;
-
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = NIX_TXSCH_LVL_TL1;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
-
- otx2_nix_queues_ctx_dump(dev->eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
deleted file mode 100644
index 60bf6c3f5f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ /dev/null
@@ -1,215 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-#include <math.h>
-
-#include "otx2_ethdev.h"
-
-static int
-parse_flow_max_priority(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the max priority to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the prealloc size to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_reta_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val <= RTE_ETH_RSS_RETA_SIZE_64)
- val = RTE_ETH_RSS_RETA_SIZE_64;
- else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
- val = RTE_ETH_RSS_RETA_SIZE_128;
- else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
- val = RTE_ETH_RSS_RETA_SIZE_256;
- else
- val = NIX_RSS_RETA_SIZE;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flag(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- *(uint16_t *)extra_args = atoi(value);
-
- return 0;
-}
-
-static int
-parse_sqb_count(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_switch_header_type(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- if (strcmp(value, "higig2") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_HIGIG;
-
- if (strcmp(value, "dsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EDSA;
-
- if (strcmp(value, "chlen90b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_90B;
-
- if (strcmp(value, "chlen24b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_24B;
-
- if (strcmp(value, "exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EXDSA;
-
- if (strcmp(value, "vlan_exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_VLAN_EXDSA;
-
- return 0;
-}
-
-#define OTX2_RSS_RETA_SIZE "reta_size"
-#define OTX2_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
-#define OTX2_SCL_ENABLE "scalar_enable"
-#define OTX2_MAX_SQB_COUNT "max_sqb_count"
-#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
-#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
-#define OTX2_SWITCH_HEADER_TYPE "switch_header"
-#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
-#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
-#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
-
-int
-otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
-{
- uint16_t rss_size = NIX_RSS_RETA_SIZE;
- uint16_t sqb_count = NIX_MAX_SQB;
- uint16_t flow_prealloc_size = 8;
- uint16_t switch_header_type = 0;
- uint16_t flow_max_priority = 3;
- uint16_t ipsec_in_max_spi = 1;
- uint16_t rss_tag_as_xor = 0;
- uint16_t scalar_enable = 0;
- struct rte_kvargs *kvlist;
- uint16_t lock_rx_ctx = 0;
- uint16_t lock_tx_ctx = 0;
-
- if (devargs == NULL)
- goto null_devargs;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
- &parse_reta_size, &rss_size);
- rte_kvargs_process(kvlist, OTX2_IPSEC_IN_MAX_SPI,
- &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
- rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
- &parse_flag, &scalar_enable);
- rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
- &parse_sqb_count, &sqb_count);
- rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
- &parse_flow_prealloc_size, &flow_prealloc_size);
- rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
- &parse_flow_max_priority, &flow_max_priority);
- rte_kvargs_process(kvlist, OTX2_SWITCH_HEADER_TYPE,
- &parse_switch_header_type, &switch_header_type);
- rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
- &parse_flag, &rss_tag_as_xor);
- rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
- &parse_flag, &lock_rx_ctx);
- rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
- &parse_flag, &lock_tx_ctx);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-
-null_devargs:
- dev->ipsec_in_max_spi = ipsec_in_max_spi;
- dev->scalar_ena = scalar_enable;
- dev->rss_tag_as_xor = rss_tag_as_xor;
- dev->max_sqb_count = sqb_count;
- dev->lock_rx_ctx = lock_rx_ctx;
- dev->lock_tx_ctx = lock_tx_ctx;
- dev->rss_info.rss_size = rss_size;
- dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
- dev->npc_flow.flow_max_priority = flow_max_priority;
- dev->npc_flow.switch_header_type = switch_header_type;
- return 0;
-
-exit:
- return -EINVAL;
-}
-
-RTE_PMD_REGISTER_PARAM_STRING(OCTEONTX2_PMD,
- OTX2_RSS_RETA_SIZE "=<64|128|256>"
- OTX2_IPSEC_IN_MAX_SPI "=<1-65535>"
- OTX2_SCL_ENABLE "=1"
- OTX2_MAX_SQB_COUNT "=<8-512>"
- OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
- OTX2_FLOW_MAX_PRIORITY "=<1-32>"
- OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b|chlen24b>"
- OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>"
- OTX2_LOCK_RX_CTX "=1"
- OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
deleted file mode 100644
index cc573bb2e8..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ /dev/null
@@ -1,493 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-
-static void
-nix_lf_err_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
- /* Enable all dev interrupt except for RQ_DISABLED */
- otx2_nix_err_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
-}
-
-static void
-nix_lf_ras_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_RAS);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
- /* Enable dev interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
-}
-
-static inline uint8_t
-nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, dev->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
-{
- return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
-{
- return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
-{
- return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
-}
-
-static inline void
-nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
-{
- uint64_t reg;
-
- reg = otx2_read64(dev->base + off);
- if (reg & BIT_ULL(44))
- otx2_err("SQ=%d err_code=0x%x",
- (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
-}
-
-static void
-nix_lf_cq_irq(void *param)
-{
- struct otx2_qint *cint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = cint->eth_dev;
- struct otx2_eth_dev *dev;
-
- dev = otx2_eth_pmd_priv(eth_dev);
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
-}
-
-static void
-nix_lf_q_irq(void *param)
-{
- struct otx2_qint *qint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = qint->eth_dev;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t irq, qintx = qint->qintx;
- int q, cq, rq, sq;
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
- intr, qintx, dev->pf, dev->vf);
-
- /* Handle RQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- rq = q % dev->qints;
- irq = nix_lf_rq_irq_get_and_clear(dev, rq);
-
- if (irq & BIT_ULL(NIX_RQINT_DROP))
- otx2_err("RQ=%d NIX_RQINT_DROP", rq);
-
- if (irq & BIT_ULL(NIX_RQINT_RED))
- otx2_err("RQ=%d NIX_RQINT_RED", rq);
- }
-
- /* Handle CQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- cq = q % dev->qints;
- irq = nix_lf_cq_irq_get_and_clear(dev, cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
- otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
- otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
- otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
- }
-
- /* Handle SQ interrupts */
- for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
- sq = q % dev->qints;
- irq = nix_lf_sq_irq_get_and_clear(dev, sq);
-
- if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
- otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- }
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
-
- /* Dump registers to stdout */

- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-int
-oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q, sqs, rqs, qs, rc = 0;
-
- /* Figure out max qintx required */
- rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
- sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
- qs = RTE_MAX(rqs, sqs);
-
- dev->configured_qints = qs;
-
- for (q = 0; q < qs; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- dev->qints_mem[q].eth_dev = eth_dev;
- dev->qints_mem[q].qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- if (rc)
- break;
-
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_qints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- }
-}
-
-int
-oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rc = 0, vec, q;
-
- dev->configured_cints = RTE_MIN(dev->cints,
- eth_dev->data->nb_rx_queues);
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- dev->cints_mem[q].eth_dev = eth_dev;
- dev->cints_mem[q].qintx = q;
-
- /* Sync cints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- if (rc) {
- otx2_err("Fail to register CQ irq, rc=%d", rc);
- return rc;
- }
-
- rc = rte_intr_vec_list_alloc(handle, "intr_vec",
- dev->configured_cints);
- if (rc) {
- otx2_err("Fail to allocate intr vec list, "
- "rc=%d", rc);
- return rc;
- }
- /* VFIO vector zero is reserved for misc interrupt so
- * doing required adjustment. (b13bfab4cd)
- */
- if (rte_intr_vec_list_index_set(handle, q,
- RTE_INTR_VEC_RXTX_OFFSET + vec))
- return -1;
-
- /* Configure CQE interrupt coalescing parameters */
- otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
- (CQ_CQE_THRESH_DEFAULT << 32) |
- (CQ_TIMER_THRESH_DEFAULT << 48)),
- dev->base + NIX_LF_CINTX_WAIT((q)));
-
- /* Keeping the CQ interrupt disabled as the rx interrupt
- * feature needs to be enabled/disabled on demand.
- */
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- }
-}
-
-int
-otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
- dev->nix_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = nix_lf_register_err_irq(eth_dev);
- /* Register RAS interrupt */
- rc |= nix_lf_register_ras_irq(eth_dev);
-
- return rc;
-}
-
-void
-otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
-{
- nix_lf_unregister_err_irq(eth_dev);
- nix_lf_unregister_ras_irq(eth_dev);
-}
-
-int
-otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1S(rx_queue_id));
-
- return 0;
-}
-
-int
-otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Clear and disable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1C(rx_queue_id));
-
- return 0;
-}
-
-void
-otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable all nix lf error interrupts except
- * RQ_DISABLED and CQ_DISABLED.
- */
- if (enb)
- otx2_write64(~(BIT_ULL(11) | BIT_ULL(24)),
- dev->base + NIX_LF_ERR_INT_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
-}
-
-void
-otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (enb)
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
deleted file mode 100644
index 48781514c3..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ /dev/null
@@ -1,589 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_ethdev.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_frs_cfg *req;
- int rc;
-
- if (dev->configured && otx2_ethdev_is_ptp_en(dev))
- frame_size += NIX_TIMESYNC_RX_OFFSET;
-
- buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
-
- /* Refuse MTU that requires the support of scattered packets
- * when this feature has not been enabled before.
- */
- if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
- return -EINVAL;
-
- /* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
- (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->update_smq = true;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
- /* FRS HW config should exclude FCS but include NPC VTAG insert size */
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Now just update Rx MAXLEN */
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- return rc;
-}
-
-int
-otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_rxq *rxq;
- int rc;
-
- rxq = data->rx_queues[0];
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- rc = otx2_nix_mtu_set(eth_dev, data->mtu);
- if (rc)
- otx2_err("Failed to set default MTU size %d", rc);
-
- return rc;
-}
-
-static void
-nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
-
- otx2_mbox_process(mbox);
-}
-
-void
-otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
- eth_dev->data->promiscuous = en;
- otx2_nix_vlan_update_promisc(eth_dev, en);
-}
-
-int
-otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
-{
- otx2_nix_promisc_config(eth_dev, 1);
- nix_cgx_promisc_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- otx2_nix_promisc_config(eth_dev, dev->dmac_filter_enable);
- nix_cgx_promisc_config(eth_dev, 0);
- dev->dmac_filter_enable = false;
-
- return 0;
-}
-
-static void
-nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
- else if (eth_dev->data->promiscuous)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 0);
-
- return 0;
-}
-
-void
-otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo)
-{
- struct otx2_eth_rxq *rxq;
-
- rxq = eth_dev->data->rx_queues[queue_id];
-
- qinfo->mp = rxq->pool;
- qinfo->scattered_rx = eth_dev->data->scattered_rx;
- qinfo->nb_desc = rxq->qconf.nb_desc;
-
- qinfo->conf.rx_free_thresh = 0;
- qinfo->conf.rx_drop_en = 0;
- qinfo->conf.rx_deferred_start = 0;
- qinfo->conf.offloads = rxq->offloads;
-}
-
-void
-otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo)
-{
- struct otx2_eth_txq *txq;
-
- txq = eth_dev->data->tx_queues[queue_id];
-
- qinfo->nb_desc = txq->qconf.nb_desc;
-
- qinfo->conf.tx_thresh.pthresh = 0;
- qinfo->conf.tx_thresh.hthresh = 0;
- qinfo->conf.tx_thresh.wthresh = 0;
-
- qinfo->conf.tx_free_thresh = 0;
- qinfo->conf.tx_rs_thresh = 0;
- qinfo->conf.offloads = txq->offloads;
- qinfo->conf.tx_deferred_start = 0;
-}
-
-int
-otx2_rx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } rx_offload_map[] = {
- {NIX_RX_OFFLOAD_RSS_F, "RSS,"},
- {NIX_RX_OFFLOAD_PTYPE_F, " Ptype,"},
- {NIX_RX_OFFLOAD_CHECKSUM_F, " Checksum,"},
- {NIX_RX_OFFLOAD_VLAN_STRIP_F, " VLAN Strip,"},
- {NIX_RX_OFFLOAD_MARK_UPDATE_F, " Mark Update,"},
- {NIX_RX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_RX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
- "Scalar, Rx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Rx offload info */
- for (i = 0; i < RTE_DIM(rx_offload_map); i++) {
- if (dev->rx_offload_flags & rx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- rx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-int
-otx2_tx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } tx_offload_map[] = {
- {NIX_TX_OFFLOAD_L3_L4_CSUM_F, " Inner L3/L4 csum,"},
- {NIX_TX_OFFLOAD_OL3_OL4_CSUM_F, " Outer L3/L4 csum,"},
- {NIX_TX_OFFLOAD_VLAN_QINQ_F, " VLAN Insertion,"},
- {NIX_TX_OFFLOAD_MBUF_NOFF_F, " MBUF free disable,"},
- {NIX_TX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_TX_OFFLOAD_TSO_F, " TSO,"},
- {NIX_TX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
- "Scalar, Tx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Tx offload info */
- for (i = 0; i < RTE_DIM(tx_offload_map); i++) {
- if (dev->tx_offload_flags & tx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- tx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-static void
-nix_rx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_CQ_OP_STATUS));
- if (val & (OP_ERR | CQ_ERR))
- val = 0;
-
- *tail = (uint32_t)(val & 0xFFFFF);
- *head = (uint32_t)((val >> 20) & 0xFFFFF);
-}
-
-uint32_t
-otx2_nix_rx_queue_count(void *rx_queue)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
- uint32_t head, tail;
-
- nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
- return (tail - head) % rxq->qlen;
-}
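`otx2_nix_rx_queue_count()` relies on unsigned modular arithmetic: `(tail - head) % qlen` yields the correct occupancy even after `tail` has wrapped past zero, because the subtraction wraps in `uint32_t`. A standalone sketch of the same computation (names are illustrative, not the driver's):

```c
#include <stdint.h>

/* Ring occupancy from head/tail indices, as in otx2_nix_rx_queue_count().
 * The unsigned subtraction handles the wrap-around case: when tail has
 * wrapped below head, (tail - head) wraps modulo 2^32 and the final
 * % qlen recovers the true count (qlen is the ring size). */
static inline uint32_t ring_count(uint32_t head, uint32_t tail, uint32_t qlen)
{
	return (tail - head) % qlen;
}
```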
-
-static inline int
-nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
-{
- /* Check whether the given offset (queue index) has a packet filled by HW */
- if (tail > head && offset <= tail && offset >= head)
- return 1;
- /* Wrap around case */
- if (head > tail && (offset >= head || offset <= tail))
- return 1;
-
- return 0;
-}
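The membership test above has two cases: the normal one where the occupied window `[head, tail]` is contiguous, and the wrap-around one where it spans the end of the ring. Restated as a self-contained predicate (same logic as the driver's `nix_offset_has_packet()`, reproduced here only so it can be exercised in isolation):

```c
#include <stdint.h>

/* Is `offset` inside the occupied [head, tail] window of a circular
 * queue? Two cases: contiguous window (tail > head) and wrapped
 * window (head > tail), matching nix_offset_has_packet(). */
static inline int offset_has_packet(uint32_t head, uint32_t tail,
				    uint16_t offset)
{
	if (tail > head && offset <= tail && offset >= head)
		return 1;
	/* Wrap around case */
	if (head > tail && (offset >= head || offset <= tail))
		return 1;
	return 0;
}
```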
-
-int
-otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- uint32_t head, tail;
-
- if (rxq->qlen <= offset)
- return -EINVAL;
-
- nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
- &head, &tail, rxq->rq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_RX_DESC_DONE;
- else
- return RTE_ETH_RX_DESC_AVAIL;
-}
-
-static void
-nix_tx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_SQ_OP_STATUS));
- if (val & OP_ERR)
- val = 0;
-
- *tail = (uint32_t)((val >> 28) & 0x3F);
- *head = (uint32_t)((val >> 20) & 0x3F);
-}
-
-int
-otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint32_t head, tail;
-
- if (txq->qconf.nb_desc <= offset)
- return -EINVAL;
-
- nix_tx_head_tail_get(txq->dev, &head, &tail, txq->sq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_TX_DESC_DONE;
- else
- return RTE_ETH_TX_DESC_FULL;
-}
-
-/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
-int
-otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
-{
- RTE_SET_USED(txq);
- RTE_SET_USED(free_cnt);
-
- return 0;
-}
-
-int
-otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc = (int)fw_size;
-
- if (fw_size > sizeof(dev->mkex_pfl_name))
- rc = sizeof(dev->mkex_pfl_name);
-
- rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
-
- rc += 1; /* Add the size of '\0' */
- if (fw_size < (size_t)rc)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
-{
- RTE_SET_USED(eth_dev);
-
- if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
- return 0;
-
- return -ENOTSUP;
-}
-
-int
-otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_ops **ops)
-{
- *ops = &otx2_flow_ops;
- return 0;
-}
-
-static struct cgx_fw_data *
-nix_get_fwdata(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_fw_data *rsp = NULL;
- int rc;
-
- otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get fw data: %d", rc);
- return NULL;
- }
-
- return rsp;
-}
-
-int
-otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
- modinfo->eeprom_len = SFP_EEPROM_SIZE;
-
- return 0;
-}
-
-int
-otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- if (info->offset + info->length > SFP_EEPROM_SIZE)
- return -EINVAL;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
- info->length);
-
- return 0;
-}
-
-int
-otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- devinfo->min_rx_bufsize = NIX_MIN_FRS;
- devinfo->max_rx_pktlen = NIX_MAX_FRS;
- devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_mac_addrs = dev->max_mac_entries;
- devinfo->max_vfs = pci_dev->max_vfs;
- devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
- devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
- if (dev->configured && otx2_ethdev_is_ptp_en(dev)) {
- devinfo->max_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->min_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->max_rx_pktlen -= NIX_TIMESYNC_RX_OFFSET;
- }
-
- devinfo->rx_offload_capa = dev->rx_offload_capa;
- devinfo->tx_offload_capa = dev->tx_offload_capa;
- devinfo->rx_queue_offload_capa = 0;
- devinfo->tx_queue_offload_capa = 0;
-
- devinfo->reta_size = dev->rss_info.rss_size;
- devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
- devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
-
- devinfo->default_rxconf = (struct rte_eth_rxconf) {
- .rx_drop_en = 0,
- .offloads = 0,
- };
-
- devinfo->default_txconf = (struct rte_eth_txconf) {
- .offloads = 0,
- };
-
- devinfo->default_rxportconf = (struct rte_eth_dev_portconf) {
- .ring_size = NIX_RX_DEFAULT_RING_SZ,
- };
-
- devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = NIX_RX_MIN_DESC,
- .nb_align = NIX_RX_MIN_DESC_ALIGN,
- .nb_seg_max = NIX_RX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
- };
- devinfo->rx_desc_lim.nb_max =
- RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
- NIX_RX_MIN_DESC_ALIGN);
-
- devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = 1,
- .nb_align = 1,
- .nb_seg_max = NIX_TX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
- };
-
- /* Auto negotiation disabled */
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
- if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
- RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
-
- /* 50G and 100G to be supported for board version C0
- * and above.
- */
- if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
- RTE_ETH_LINK_SPEED_100G;
- }
-
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
- RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
- devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
deleted file mode 100644
index 4d40184de4..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ /dev/null
@@ -1,923 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_memzone.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_fp.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#define ERR_STR_SZ 256
-
-struct eth_sec_tag_const {
- RTE_STD_C11
- union {
- struct {
- uint32_t rsvd_11_0 : 12;
- uint32_t port : 8;
- uint32_t event_type : 4;
- uint32_t rsvd_31_24 : 8;
- };
- uint32_t u32;
- };
-};
-
-static struct rte_cryptodev_capabilities otx2_eth_sec_crypto_caps[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 20,
- .max = 64,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- },
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability otx2_eth_sec_capabilities[] = {
- { /* IPsec Inline Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Inline Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-lookup_mem_sa_tbl_clear(struct rte_eth_dev *eth_dev)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return;
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
- if (sa_tbl[port] == NULL)
- return;
-
- rte_free(sa_tbl[port]);
- sa_tbl[port] = NULL;
-}
-
-static int
-lookup_mem_sa_index_update(struct rte_eth_dev *eth_dev, int spi, void *sa,
- char *err_str)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not find fastpath lookup table");
- return -EINVAL;
- }
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
-
- if (sa_tbl[port] == NULL) {
- sa_tbl[port] = rte_malloc(NULL, dev->ipsec_in_max_spi *
- sizeof(uint64_t), 0);
- }
-
- sa_tbl[port][spi] = (uint64_t)sa;
-
- return 0;
-}
-
-static inline void
-in_sa_mz_name_get(char *name, int size, uint16_t port)
-{
- snprintf(name, size, "otx2_ipsec_in_sadb_%u", port);
-}
-
-static struct otx2_ipsec_fp_in_sa *
-in_sa_get(uint16_t port, int sa_index)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- struct otx2_ipsec_fp_in_sa *sa;
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- otx2_err("Could not get the memzone reserved for IN SA DB");
- return NULL;
- }
-
- sa = mz->addr;
-
- return sa + sa_index;
-}
-
-static int
-ipsec_sa_const_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_ip *sess)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- sess->partial_len = sizeof(struct rte_ipv4_hdr);
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- sess->partial_len += sizeof(struct rte_esp_hdr);
- sess->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- sess->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- sess->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- sess->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- sess->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- sess->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- }
- return 0;
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- sess->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- sess->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- sess->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
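`ipsec_sa_const_set()` precomputes the fixed per-packet overhead (`partial_len`) an ESP tunnel adds, so the fast path can size packets without re-deriving it per packet: outer IPv4 header, ESP header, optional UDP encapsulation for NAT-T, plus the cipher's IV and ICV. A rough model of that bookkeeping for the AES-GCM case; the sizes below are the usual wire-format values, not the driver's `OTX2_SEC_*` constants:

```c
#include <stdint.h>

/* Fixed ESP-over-IPv4 tunnel overhead for AES-GCM, modeled after the
 * partial_len accounting in ipsec_sa_const_set(). All constants are
 * illustrative wire-format sizes, assumed rather than taken from the
 * driver headers. */
#define IPV4_HDR_LEN	20
#define ESP_HDR_LEN	 8
#define UDP_HDR_LEN	 8
#define AES_GCM_IV_LEN	 8
#define AES_GCM_MAC_LEN	16

static uint32_t esp_gcm_overhead(int udp_encap)
{
	uint32_t len = IPV4_HDR_LEN + ESP_HDR_LEN;

	if (udp_encap)
		len += UDP_HDR_LEN;	/* NAT-T UDP encapsulation */

	return len + AES_GCM_IV_LEN + AES_GCM_MAC_LEN;
}
```

On top of this fixed part, the payload is padded up to the cipher's block multiple, which the driver tracks separately as `roundup_byte`/`roundup_len`.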
-
-static int
-hmac_init(struct otx2_ipsec_fp_sa_ctl *ctl, struct otx2_cpt_qp *qp,
- const uint8_t *auth_key, int len, uint8_t *hmac_key)
-{
- struct inst_data {
- struct otx2_cpt_res cpt_res;
- uint8_t buffer[64];
- } *md;
-
- volatile struct otx2_cpt_res *res;
- uint64_t timeout, lmt_status;
- struct otx2_cpt_inst_s inst;
- rte_iova_t md_iova;
- int ret;
-
- memset(&inst, 0, sizeof(struct otx2_cpt_inst_s));
-
- md = rte_zmalloc(NULL, sizeof(struct inst_data), OTX2_CPT_RES_ALIGN);
- if (md == NULL)
- return -ENOMEM;
-
- memcpy(md->buffer, auth_key, len);
-
- md_iova = rte_malloc_virt2iova(md);
- if (md_iova == RTE_BAD_IOVA) {
- ret = -EINVAL;
- goto free_md;
- }
-
- inst.res_addr = md_iova + offsetof(struct inst_data, cpt_res);
- inst.opcode = OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD;
- inst.param2 = ctl->auth_type;
- inst.dlen = len;
- inst.dptr = md_iova + offsetof(struct inst_data, buffer);
- inst.rptr = inst.dptr;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
-
- md->cpt_res.compcode = 0;
- md->cpt_res.uc_compcode = 0xff;
-
- timeout = rte_get_timer_cycles() + 5 * rte_get_timer_hz();
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(qp->lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- res = (volatile struct otx2_cpt_res *)&md->cpt_res;
-
- /* Wait until instruction completes or times out */
- while (res->uc_compcode == 0xff) {
- if (rte_get_timer_cycles() > timeout)
- break;
- }
-
- if (res->u16[0] != OTX2_SEC_COMP_GOOD) {
- ret = -EIO;
- goto free_md;
- }
-
- /* Retrieve the ipad and opad from rptr */
- memcpy(hmac_key, md->buffer, 48);
-
- ret = 0;
-
-free_md:
- rte_free(md);
- return ret;
-}
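`hmac_init()` above busy-waits on a volatile completion byte that the CPT engine rewrites, bounded by a deadline in timer cycles. The shape of that loop, extracted into a testable sketch; `now()` and the `0xff` "busy" marker stand in for `rte_get_timer_cycles()` and the uninitialized `uc_compcode` value, and the names are hypothetical:

```c
#include <stdint.h>

/* Skeleton of the bounded completion-polling loop in hmac_init():
 * spin on a volatile status byte until the hardware overwrites it,
 * but give up once a cycle-count deadline passes. The clock is
 * injected as a callback so the loop can be tested without hardware. */
static int poll_until_done(volatile uint8_t *status, uint64_t deadline,
			   uint64_t (*now)(void))
{
	while (*status == 0xff) {	/* 0xff == still busy */
		if (now() > deadline)
			return -1;	/* timed out */
	}
	return 0;			/* completion observed */
}
```

The driver computes the deadline up front (`rte_get_timer_cycles() + 5 * rte_get_timer_hz()`, i.e. roughly five seconds) so the comparison inside the loop stays a single read-and-compare.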
-
-static int
-eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_out_sa *sa;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_qp *qp;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- sess = &priv->ipsec.ip;
-
- sa = &sess->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sess, 0, sizeof(struct otx2_sec_session_ipsec_ip));
-
- sess->seq = 1;
-
- ret = ipsec_sa_const_set(ipsec, crypto_xform, sess);
- if (ret < 0)
- return ret;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- memcpy(sa->nonce, &ipsec->salt, 4);
-
- if (ipsec->options.udp_encap == 1) {
- sa->udp_src = 4500;
- sa->udp_dst = 4500;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- /* Start ip id from 1 */
- sess->ip_id = 1;
-
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memcpy(&sa->ip_src, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&sa->ip_dst, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else {
- return -EINVAL;
- }
- } else {
- return -EINVAL;
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- /* Determine word 7 of CPT instruction */
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
- inst.cptr = rte_mempool_virt2iova(sa);
- sess->inst_w7 = inst.u64[7];
-
- /* Get CPT QP to be used for this SA */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret)
- return ret;
-
- sess->qp = qp;
-
- sess->cpt_lmtline = qp->lmtline;
- sess->cpt_nq_reg = qp->lf_nq_reg;
-
- /* Populate control word */
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- goto cpt_put;
-
- if (auth_key_len && auth_key) {
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- if (ret)
- goto cpt_put;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- return 0;
-cpt_put:
- otx2_sec_idev_tx_cpt_qp_put(sess->qp);
- return ret;
-}
-
-static int
-eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- char err_str[ERR_STR_SZ];
- struct otx2_cpt_qp *qp;
-
- memset(err_str, 0, ERR_STR_SZ);
-
- if (ipsec->spi >= dev->ipsec_in_max_spi) {
- otx2_err("SPI exceeds max supported");
- return -EINVAL;
- }
-
- sa = in_sa_get(port, ipsec->spi);
- if (sa == NULL)
- return -ENOMEM;
-
- ctl = &sa->ctl;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- sess = &priv->ipsec.ip;
-
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
-
- if (ctl->valid) {
- snprintf(err_str, ERR_STR_SZ, "SA already registered");
- ret = -EEXIST;
- goto tbl_unlock;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0) {
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- } else {
- snprintf(err_str, ERR_STR_SZ, "Invalid cipher key len");
- ret = -EINVAL;
- goto sa_clear;
- }
-
- sess->in_sa = sa;
-
- sa->userdata = priv->userdata;
-
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- if (lookup_mem_sa_index_update(eth_dev, ipsec->spi, sa, err_str)) {
- ret = -EINVAL;
- goto sa_clear;
- }
-
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not set SA CTL word (err: %d)", ret);
- goto sa_clear;
- }
-
- if (auth_key_len && auth_key) {
- /* Get a queue pair for HMAC init */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not get CPT QP");
- goto sa_clear;
- }
-
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- otx2_sec_idev_tx_cpt_qp_put(qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not put CPT QP");
- goto sa_clear;
- }
- }
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- snprintf(err_str, ERR_STR_SZ,
- "Replay window size is not supported");
- ret = -ENOTSUP;
- goto sa_clear;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not allocate memory");
- ret = -ENOMEM;
- goto sa_clear;
- }
-
- rte_spinlock_init(&sa->replay->lock);
- /*
- * Set window bottom to 1, base and top to size of
- * window
- */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- return 0;
-
-sa_clear:
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
-tbl_unlock:
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
-
- otx2_err("%s", err_str);
-
- return ret;
-}
-
-static int
-eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- ret = ipsec_fp_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return eth_sec_ipsec_in_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
- else
- return eth_sec_ipsec_out_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_eth_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- /*
- * Save userdata provided by the application. For ingress packets, this
- * could be used to identify the SA.
- */
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static void
-otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
-{
- if (sa != NULL) {
- if (sa->replay_win_sz && sa->replay)
- rte_free(sa->replay);
- }
-}
-
-static int
-otx2_eth_sec_session_destroy(void *device,
- struct rte_security_session *sess)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
- struct otx2_sec_session_ipsec_ip *sess_ip;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
- int ret;
-
- priv = get_sec_session_private_data(sess);
- if (priv == NULL)
- return -EINVAL;
-
- sess_ip = &priv->ipsec.ip;
-
- if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
- sa = sess_ip->in_sa;
-
- /* Release the anti replay window */
- otx2_eth_sec_free_anti_replay(sa);
-
- /* Clear SA table entry */
- if (sa != NULL) {
- sa->ctl.valid = 0;
- rte_io_wmb();
- }
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- }
-
- /* Release CPT LF used for this session */
- if (sess_ip->qp != NULL) {
- ret = otx2_sec_idev_tx_cpt_qp_put(sess_ip->qp);
- if (ret)
- return ret;
- }
-
- sess_mp = rte_mempool_from_obj(priv);
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_eth_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static const struct rte_security_capability *
-otx2_eth_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_eth_sec_capabilities;
-}
-
-static struct rte_security_ops otx2_eth_sec_ops = {
- .session_create = otx2_eth_sec_session_create,
- .session_destroy = otx2_eth_sec_session_destroy,
- .session_get_size = otx2_eth_sec_session_get_size,
- .capabilities_get = otx2_eth_sec_capabilities_get
-};
-
-int
-otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev)
-{
- struct rte_security_ctx *ctx;
- int ret;
-
- ctx = rte_malloc("otx2_eth_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
- if (ctx == NULL)
- return -ENOMEM;
-
- ret = otx2_sec_idev_cfg_init(eth_dev->data->port_id);
- if (ret) {
- rte_free(ctx);
- return ret;
- }
-
- /* Populate ctx */
-
- ctx->device = eth_dev;
- ctx->ops = &otx2_eth_sec_ops;
- ctx->sess_cnt = 0;
- ctx->flags =
- (RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
-
- eth_dev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev)
-{
- rte_free(eth_dev->security_ctx);
-}
-
-static int
-eth_sec_ipsec_cfg(struct rte_eth_dev *eth_dev, uint8_t tt)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- struct nix_inline_ipsec_lf_cfg *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct eth_sec_tag_const tag_const;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
- req->enable = 1;
- req->sa_base_addr = mz->iova;
-
- req->ipsec_cfg0.tt = tt;
-
- tag_const.u32 = 0;
- tag_const.event_type = RTE_EVENT_TYPE_ETHDEV;
- tag_const.port = port;
- req->ipsec_cfg0.tag_const = tag_const.u32;
-
- req->ipsec_cfg0.sa_pow2_size =
- rte_log2_u32(sizeof(struct otx2_ipsec_fp_in_sa));
- req->ipsec_cfg0.lenm1_max = NIX_MAX_FRS - 1;
-
- req->ipsec_cfg1.sa_idx_w = rte_log2_u32(dev->ipsec_in_max_spi);
- req->ipsec_cfg1.sa_idx_max = dev->ipsec_in_max_spi - 1;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- int ret;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = 0; /* Read RQ:0 context */
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret < 0) {
- otx2_err("Could not read RQ context");
- return ret;
- }
-
- /* Update tag type */
- ret = eth_sec_ipsec_cfg(eth_dev, rsp->rq.sso_tt);
- if (ret < 0)
- otx2_err("Could not update sec eth tag type");
-
- return ret;
-}
-
-int
-otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
-{
- const size_t sa_width = sizeof(struct otx2_ipsec_fp_in_sa);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int mz_sz, ret;
- uint16_t nb_sa;
-
- RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
- !RTE_IS_POWER_OF_2(sa_width));
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return 0;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- nb_sa = dev->ipsec_in_max_spi;
- mz_sz = nb_sa * sa_width;
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_reserve_aligned(name, mz_sz, rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-
- if (mz == NULL) {
- otx2_err("Could not allocate inbound SA DB");
- return -ENOMEM;
- }
-
- memset(mz->addr, 0, mz_sz);
-
- ret = eth_sec_ipsec_cfg(eth_dev, SSO_TT_ORDERED);
- if (ret < 0) {
- otx2_err("Could not configure inline IPsec");
- goto sec_fini;
- }
-
- rte_spinlock_init(&dev->ipsec_tbl_lock);
-
- return 0;
-
-sec_fini:
- otx2_err("Could not configure device for security");
- otx2_eth_sec_fini(eth_dev);
- return ret;
-}
-
-void
-otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return;
-
- lookup_mem_sa_tbl_clear(eth_dev);
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- rte_memzone_free(rte_memzone_lookup(name));
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.h b/drivers/net/octeontx2/otx2_ethdev_sec.h
deleted file mode 100644
index 298b00bf89..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.h
+++ /dev/null
@@ -1,130 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_H__
-#define __OTX2_ETHDEV_SEC_H__
-
-#include <rte_ethdev.h>
-
-#include "otx2_ipsec_fp.h"
-#include "otx2_ipsec_po.h"
-
-#define OTX2_CPT_RES_ALIGN 16
-#define OTX2_NIX_SEND_DESC_ALIGN 16
-#define OTX2_CPT_INST_SIZE 64
-
-#define OTX2_CPT_EGRP_INLINE_IPSEC 1
-
-#define OTX2_CPT_OP_INLINE_IPSEC_OUTB (0x40 | 0x25)
-#define OTX2_CPT_OP_INLINE_IPSEC_INB (0x40 | 0x26)
-#define OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD (0x40 | 0x27)
-
-#define OTX2_SEC_CPT_COMP_GOOD 0x1
-#define OTX2_SEC_UC_COMP_GOOD 0x0
-#define OTX2_SEC_COMP_GOOD (OTX2_SEC_UC_COMP_GOOD << 8 | \
- OTX2_SEC_CPT_COMP_GOOD)
-
-/* CPT Result */
-struct otx2_cpt_res {
- union {
- struct {
- uint64_t compcode:8;
- uint64_t uc_compcode:8;
- uint64_t doneint:1;
- uint64_t reserved_17_63:47;
- uint64_t reserved_64_127;
- };
- uint16_t u16[8];
- };
-};
-
-struct otx2_cpt_inst_s {
- union {
- struct {
- /* W0 */
- uint64_t nixtxl : 3;
- uint64_t doneint : 1;
- uint64_t nixtx_addr : 60;
- /* W1 */
- uint64_t res_addr : 64;
- /* W2 */
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_175_172 : 4;
- uint64_t rvu_pf_func : 16;
- /* W3 */
- uint64_t qord : 1;
- uint64_t rsvd_194_193 : 2;
- uint64_t wqe_ptr : 61;
- /* W4 */
- uint64_t dlen : 16;
- uint64_t param2 : 16;
- uint64_t param1 : 16;
- uint64_t opcode : 16;
- /* W5 */
- uint64_t dptr : 64;
- /* W6 */
- uint64_t rptr : 64;
- /* W7 */
- uint64_t cptr : 61;
- uint64_t egrp : 3;
- };
- uint64_t u64[8];
- };
-};
-
-/*
- * Security session for inline IPsec protocol offload. This is private data of
- * inline capable PMD.
- */
-struct otx2_sec_session_ipsec_ip {
- RTE_STD_C11
- union {
- /*
- * Inbound SA would accessed by crypto block. And so the memory
- * is allocated differently and shared with the h/w. Only
- * holding a pointer to this memory in the session private
- * space.
- */
- void *in_sa;
- /* Outbound SA */
- struct otx2_ipsec_fp_out_sa out_sa;
- };
-
- /* Address of CPT LMTLINE */
- void *cpt_lmtline;
- /* CPT LF enqueue register address */
- rte_iova_t cpt_nq_reg;
-
- /* Pre calculated lengths and data for a session */
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq;
- uint32_t esn_hi;
- };
- };
-
- uint64_t inst_w7;
-
- /* CPT QP used by SA */
- struct otx2_cpt_qp *qp;
-};
-
-int otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_init(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_fini(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_SEC_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
deleted file mode 100644
index 021782009f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ /dev/null
@@ -1,182 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_TX_H__
-#define __OTX2_ETHDEV_SEC_TX_H__
-
-#include <rte_security.h>
-#include <rte_mbuf.h>
-
-#include "otx2_ethdev_sec.h"
-#include "otx2_security.h"
-
-struct otx2_ipsec_fp_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-static __rte_always_inline int32_t
-otx2_ipsec_fp_out_rlen_get(struct otx2_sec_session_ipsec_ip *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t base);
-
-static __rte_always_inline int
-otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
- const struct otx2_eth_txq *txq, const uint32_t offload_flags)
-{
- uint32_t dlen, rlen, desc_headroom, extend_head, extend_tail;
- struct otx2_sec_session_ipsec_ip *sess;
- struct otx2_ipsec_fp_out_hdr *hdr;
- struct otx2_ipsec_fp_out_sa *sa;
- uint64_t data_addr, desc_addr;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- uint64_t lmt_status;
- char *data;
-
- struct desc {
- struct otx2_cpt_res cpt_res __rte_aligned(OTX2_CPT_RES_ALIGN);
- struct nix_send_hdr_s nix_hdr
- __rte_aligned(OTX2_NIX_SEND_DESC_ALIGN);
- union nix_send_sg_s nix_sg;
- struct nix_iova_s nix_iova;
- } *sd;
-
- priv = (struct otx2_sec_session *)(*rte_security_dynfield(m));
- sess = &priv->ipsec.ip;
- sa = &sess->out_sa;
-
- RTE_ASSERT(sess->cpt_lmtline != NULL);
- RTE_ASSERT(!(offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F));
-
- dlen = rte_pktmbuf_pkt_len(m) + sizeof(*hdr) - RTE_ETHER_HDR_LEN;
- rlen = otx2_ipsec_fp_out_rlen_get(sess, dlen - sizeof(*hdr));
-
- RTE_BUILD_BUG_ON(OTX2_CPT_RES_ALIGN % OTX2_NIX_SEND_DESC_ALIGN);
- RTE_BUILD_BUG_ON(sizeof(sd->cpt_res) % OTX2_NIX_SEND_DESC_ALIGN);
-
- extend_head = sizeof(*hdr);
- extend_tail = rlen - dlen;
-
- desc_headroom = (OTX2_CPT_RES_ALIGN - 1) + sizeof(*sd);
-
- if (unlikely(!rte_pktmbuf_is_contiguous(m)) ||
- unlikely(rte_pktmbuf_headroom(m) < extend_head + desc_headroom) ||
- unlikely(rte_pktmbuf_tailroom(m) < extend_tail)) {
- goto drop;
- }
-
- /*
- * Extend mbuf data to point to the expected packet buffer for NIX.
- * This includes the Ethernet header followed by the encrypted IPsec
- * payload
- */
- rte_pktmbuf_append(m, extend_tail);
- data = rte_pktmbuf_prepend(m, extend_head);
- data_addr = rte_pktmbuf_iova(m);
-
- /*
- * Move the Ethernet header, to insert otx2_ipsec_fp_out_hdr prior
- * to the IP header
- */
- memcpy(data, data + sizeof(*hdr), RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_fp_out_hdr *)(data + RTE_ETHER_HDR_LEN);
-
- if (sa->ctl.enc_type == OTX2_IPSEC_FP_SA_ENC_AES_GCM) {
- /* AES-128-GCM */
- memcpy(hdr->iv, &sa->nonce, 4);
- memset(hdr->iv + 4, 0, 12); //TODO: make it random
- } else {
- /* AES-128-[CBC] + [SHA1] */
- memset(hdr->iv, 0, 16); //TODO: make it random
- }
-
- /* Keep CPT result and NIX send descriptors in headroom */
- sd = (void *)RTE_PTR_ALIGN(data - desc_headroom, OTX2_CPT_RES_ALIGN);
- desc_addr = data_addr - RTE_PTR_DIFF(data, sd);
-
- /* Prepare CPT instruction */
-
- inst.nixtx_addr = (desc_addr + offsetof(struct desc, nix_hdr)) >> 4;
- inst.doneint = 0;
- inst.nixtxl = 1;
- inst.res_addr = desc_addr + offsetof(struct desc, cpt_res);
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.wqe_ptr = desc_addr >> 3; /* FIXME: Handle errors */
- inst.qord = 1;
- inst.opcode = OTX2_CPT_OP_INLINE_IPSEC_OUTB;
- inst.dlen = dlen;
- inst.dptr = data_addr + RTE_ETHER_HDR_LEN;
- inst.u64[7] = sess->inst_w7;
-
- /* First word contains 8 bit completion code & 8 bit uc comp code */
- sd->cpt_res.u16[0] = 0;
-
- /* Prepare NIX send descriptors for output expected from CPT */
-
- sd->nix_hdr.w0.u = 0;
- sd->nix_hdr.w1.u = 0;
- sd->nix_hdr.w0.sq = txq->sq;
- sd->nix_hdr.w0.sizem1 = 1;
- sd->nix_hdr.w0.total = rte_pktmbuf_data_len(m);
- sd->nix_hdr.w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sd->nix_hdr.w0.df = otx2_nix_prefree_seg(m);
-
- sd->nix_sg.u = 0;
- sd->nix_sg.subdc = NIX_SUBDC_SG;
- sd->nix_sg.ld_type = NIX_SENDLDTYPE_LDD;
- sd->nix_sg.segs = 1;
- sd->nix_sg.seg1_size = rte_pktmbuf_data_len(m);
-
- sd->nix_iova.addr = rte_mbuf_data_iova(m);
-
- /* Mark mempool object as "put" since it is freed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
-
- inst.param1 = sess->esn_hi >> 16;
- inst.param2 = sess->esn_hi & 0xffff;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(sess->cpt_lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(sess->cpt_nq_reg);
- } while (lmt_status == 0);
-
- return 1;
-
-drop:
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* Don't free if reference count > 1 */
- if (rte_pktmbuf_prefree_seg(m) == NULL)
- return 0;
- }
- rte_pktmbuf_free(m);
- return 0;
-}
-
-#endif /* __OTX2_ETHDEV_SEC_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
deleted file mode 100644
index 1d0fe4e950..0000000000
--- a/drivers/net/octeontx2/otx2_flow.c
+++ /dev/null
@@ -1,1189 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-enum flow_vtag_cfg_dir { VTAG_TX, VTAG_RX };
-
-int
-otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct otx2_mcam_ents_info *info;
- struct rte_bitmap *bmap;
- struct rte_flow *flow;
- int entry_count = 0;
- int rc, idx;
-
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- info = &npc->flow_entry_info[idx];
- entry_count += info->live_ent;
- }
-
- if (entry_count == 0)
- return 0;
-
- /* Free all MCAM entries allocated */
- rc = otx2_flow_mcam_free_all_entries(mbox);
-
- /* Free any MCAM counters and delete flow list */
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
- if (flow->ctr_id != NPC_COUNTER_NONE)
- rc |= otx2_flow_mcam_free_counter(mbox,
- flow->ctr_id);
-
- TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
- rte_free(flow);
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
- }
- info = &npc->flow_entry_info[idx];
- info->free_ent = 0;
- info->live_ent = 0;
- }
- return rc;
-}
-
-
-static int
-flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
- struct otx2_npc_flow_info *flow_info)
-{
- /* This is non-LDATA part in search key */
- uint64_t key_data[2] = {0ULL, 0ULL};
- uint64_t key_mask[2] = {0ULL, 0ULL};
- int intf = pst->flow->nix_intf;
- int key_len, bit = 0, index;
- int off, idx, data_off = 0;
- uint8_t lid, mask, data;
- uint16_t layer_info;
- uint64_t lt, flags;
-
-
- /* Skip till Layer A data start */
- while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
- if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
- data_off++;
- bit++;
- }
-
- /* Each bit represents 1 nibble */
- data_off *= 4;
-
- index = 0;
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- /* Offset in key */
- off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
- lt = pst->lt[lid] & 0xf;
- flags = pst->flags[lid] & 0xff;
-
- /* NPC_LAYER_KEX_S */
- layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
-
- if (layer_info) {
- for (idx = 0; idx <= 2 ; idx++) {
- if (layer_info & (1 << idx)) {
- if (idx == 2)
- data = lt;
- else if (idx == 1)
- data = ((flags >> 4) & 0xf);
- else
- data = (flags & 0xf);
-
- if (data_off >= 64) {
- data_off = 0;
- index++;
- }
- key_data[index] |= ((uint64_t)data <<
- data_off);
- mask = 0xf;
- if (lt == 0)
- mask = 0;
- key_mask[index] |= ((uint64_t)mask <<
- data_off);
- data_off += 4;
- }
- }
- }
- }
-
- otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
- key_data[0], key_data[1]);
-
- /* Copy this into mcam string */
- key_len = (pst->npc->keyx_len[intf] + 7) / 8;
- otx2_npc_dbg("Key_len = %d", key_len);
- memcpy(pst->flow->mcam_data, key_data, key_len);
- memcpy(pst->flow->mcam_mask, key_mask, key_len);
-
- otx2_npc_dbg("Final flow data");
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
- idx, pst->flow->mcam_data[idx],
- idx, pst->flow->mcam_mask[idx]);
- }
-
- /*
- * Now we have mcam data and mask formatted as
- * [Key_len/4 nibbles][0 or 1 nibble hole][data]
- * hole is present if key_len is odd number of nibbles.
- * mcam data must be split into 64 bits + 48 bits segments
- * for each back W0, W1.
- */
-
- return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
-}
-
-static int
-flow_parse_attr(struct rte_eth_dev *eth_dev,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const char *errmsg = NULL;
-
- if (attr == NULL)
- errmsg = "Attribute can't be empty";
- else if (attr->group)
- errmsg = "Groups are not supported";
- else if (attr->priority >= dev->npc_flow.flow_max_priority)
- errmsg = "Priority should be with in specified range";
- else if ((!attr->egress && !attr->ingress) ||
- (attr->egress && attr->ingress))
- errmsg = "Exactly one of ingress or egress must be set";
-
- if (errmsg != NULL) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
- attr, errmsg);
- return -ENOTSUP;
- }
-
- if (attr->ingress)
- flow->nix_intf = OTX2_INTF_RX;
- else
- flow->nix_intf = OTX2_INTF_TX;
-
- flow->priority = attr->priority;
- return 0;
-}
-
-static inline int
-flow_get_free_rss_grp(struct rte_bitmap *bmap,
- uint32_t size, uint32_t *pos)
-{
- for (*pos = 0; *pos < size; ++*pos) {
- if (!rte_bitmap_get(bmap, *pos))
- break;
- }
-
- return *pos < size ? 0 : -1;
-}
-
-static int
-flow_configure_rss_action(struct otx2_eth_dev *dev,
- const struct rte_flow_action_rss *rss,
- uint8_t *alg_idx, uint32_t *rss_grp,
- int mcam_index)
-{
- struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
- uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
- uint32_t flowkey_cfg, grp_aval, i;
- uint16_t *ind_tbl = NULL;
- uint8_t flowkey_algx;
- int rc;
-
- rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
- flow_info->rss_grps, &grp_aval);
- /* RSS group :0 is not usable for flow rss action */
- if (rc < 0 || grp_aval == 0)
- return -ENOSPC;
-
- *rss_grp = grp_aval;
-
- otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
- rss->key_len);
-
- /* If queue count passed in the rss action is less than
- * HW configured reta size, replicate rss action reta
- * across HW reta table.
- */
- if (dev->rss_info.rss_size > rss->queue_num) {
- ind_tbl = reta;
-
- for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
- memcpy(reta + i * rss->queue_num, rss->queue,
- sizeof(uint16_t) * rss->queue_num);
-
- i = dev->rss_info.rss_size % rss->queue_num;
- if (i)
- memcpy(&reta[dev->rss_info.rss_size] - i,
- rss->queue, i * sizeof(uint16_t));
- } else {
- ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
- }
-
- rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
- if (rc) {
- otx2_err("Failed to init rss table rc = %d", rc);
- return rc;
- }
-
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
- *rss_grp, mcam_index);
- if (rc) {
- otx2_err("Failed to set rss hash function rc = %d", rc);
- return rc;
- }
-
- *alg_idx = flowkey_algx;
-
- rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
-
- return 0;
-}
-
-
-static int
-flow_program_rss_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const struct rte_flow_action_rss *rss;
- uint32_t rss_grp;
- uint8_t alg_idx;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
- rss = (const struct rte_flow_action_rss *)actions->conf;
-
- rc = flow_configure_rss_action(dev,
- rss, &alg_idx, &rss_grp,
- flow->mcam_id);
- if (rc)
- return rc;
-
- flow->npc_action &= (~(0xfULL));
- flow->npc_action |= NIX_RX_ACTIONOP_RSS;
- flow->npc_action |=
- ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
- NIX_RSS_ACT_ALG_OFFSET) |
- ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
- NIX_RSS_ACT_GRP_OFFSET);
- }
- }
- return 0;
-}
-
-static int
-flow_free_rss_action(struct rte_eth_dev *eth_dev,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- uint32_t rss_grp;
-
- if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
- rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
- NIX_RSS_ACT_GRP_MASK;
- if (rss_grp == 0 || rss_grp >= npc->rss_grps)
- return -EINVAL;
-
- rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
- }
-
- return 0;
-}
-
-static int
-flow_update_sec_tt(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[])
-{
- int rc = 0;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- rc = otx2_eth_sec_update_tag_type(eth_dev);
- break;
- }
- }
-
- return rc;
-}
-
-static int
-flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
-{
- otx2_npc_dbg("Meta Item");
- return 0;
-}
-
-/*
- * Parse function of each layer:
- * - Consume one or more patterns that are relevant.
- * - Update parse_state
- * - Set parse_state.pattern = last item consumed
- * - Set appropriate error code/message when returning error.
- */
-typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
-
-static int
-flow_parse_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- flow_parse_stage_func_t parse_stage_funcs[] = {
- flow_parse_meta_items,
- otx2_flow_parse_higig2_hdr,
- otx2_flow_parse_la,
- otx2_flow_parse_lb,
- otx2_flow_parse_lc,
- otx2_flow_parse_ld,
- otx2_flow_parse_le,
- otx2_flow_parse_lf,
- otx2_flow_parse_lg,
- otx2_flow_parse_lh,
- };
- struct otx2_eth_dev *hw = dev->data->dev_private;
- uint8_t layer = 0;
- int key_offset;
- int rc;
-
- if (pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
- "pattern is NULL");
- return -EINVAL;
- }
-
- memset(pst, 0, sizeof(*pst));
- pst->npc = &hw->npc_flow;
- pst->error = error;
- pst->flow = flow;
-
- /* Use integral byte offset */
- key_offset = pst->npc->keyx_len[flow->nix_intf];
- key_offset = (key_offset + 7) / 8;
-
- /* Location where LDATA would begin */
- pst->mcam_data = (uint8_t *)flow->mcam_data;
- pst->mcam_mask = (uint8_t *)flow->mcam_mask;
-
- while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
- layer < RTE_DIM(parse_stage_funcs)) {
- otx2_npc_dbg("Pattern type = %d", pattern->type);
-
- /* Skip place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- pst->pattern = pattern;
- otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
- rc = parse_stage_funcs[layer](pst);
- if (rc != 0)
- return -rte_errno;
-
- layer++;
-
- /*
- * Parse stage function sets pst->pattern to
- * 1 past the last item it consumed.
- */
- pattern = pst->pattern;
-
- if (pst->terminate)
- break;
- }
-
- /* Skip trailing place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- /* Are there more items than what we can handle? */
- if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, pattern,
- "unsupported item in the sequence");
- return -ENOTSUP;
- }
-
- return 0;
-}
-
-static int
-flow_parse_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- int err;
-
- /* Check attributes */
- err = flow_parse_attr(dev, attr, error, flow);
- if (err)
- return err;
-
- /* Check actions */
- err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
- if (err)
- return err;
-
- /* Check pattern */
- err = flow_parse_pattern(dev, pattern, error, flow, pst);
- if (err)
- return err;
-
- /* Check for overlaps? */
- return 0;
-}
-
-static int
-otx2_flow_validate(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_parse_state parse_state;
- struct rte_flow flow;
-
- memset(&flow, 0, sizeof(flow));
- return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
- &parse_state);
-}
-
-static int
-flow_program_vtag_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- uint16_t vlan_id = 0, vlan_ethtype = RTE_ETHER_TYPE_VLAN;
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } tx_vtag_action;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- bool vlan_insert_action = false;
- uint64_t rx_vtag_action = 0;
- uint8_t vlan_pcp = 0;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_OF_POP_VLAN) {
- if (dev->npc_flow.vtag_actions == 1) {
- vtag_cfg =
- otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_RX;
- vtag_cfg->rx.strip_vtag = 1;
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- }
-
- rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- rx_vtag_action |= (NPC_LID_LB << 8);
- rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = rx_vtag_action;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
- const struct rte_flow_action_of_set_vlan_vid *vtag =
- (const struct rte_flow_action_of_set_vlan_vid *)
- actions->conf;
- vlan_id = rte_be_to_cpu_16(vtag->vlan_vid);
- if (vlan_id > 0xfff) {
- otx2_err("Invalid vlan_id for set vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type == RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN) {
- const struct rte_flow_action_of_push_vlan *ethtype =
- (const struct rte_flow_action_of_push_vlan *)
- actions->conf;
- vlan_ethtype = rte_be_to_cpu_16(ethtype->ethertype);
- if (vlan_ethtype != RTE_ETHER_TYPE_VLAN &&
- vlan_ethtype != RTE_ETHER_TYPE_QINQ) {
- otx2_err("Invalid ethtype specified for push"
- " vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
- const struct rte_flow_action_of_set_vlan_pcp *pcp =
- (const struct rte_flow_action_of_set_vlan_pcp *)
- actions->conf;
- vlan_pcp = pcp->vlan_pcp;
- if (vlan_pcp > 0x7) {
- otx2_err("Invalid PCP value for pcp action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- }
- }
-
- if (vlan_insert_action) {
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->tx.vtag0 =
- ((vlan_ethtype << 16) | (vlan_pcp << 13) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- tx_vtag_action.reg = 0;
- tx_vtag_action.act.vtag0_def = rsp->vtag0_idx;
- if (tx_vtag_action.act.vtag0_def < 0) {
- otx2_err("Failed to config TX VTAG action");
- return -EINVAL;
- }
- tx_vtag_action.act.vtag0_lid = NPC_LID_LA;
- tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- tx_vtag_action.act.vtag0_relptr =
- NIX_TX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = tx_vtag_action.reg;
- }
- return 0;
-}
-
-static struct rte_flow *
-otx2_flow_create(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_parse_state parse_state;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_flow *flow, *flow_iter;
- struct otx2_flow_list *list;
- int rc;
-
- flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Memory allocation failed");
- return NULL;
- }
- memset(flow, 0, sizeof(*flow));
-
- rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
- &parse_state);
- if (rc != 0)
- goto err_exit;
-
- rc = flow_program_vtag_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program vlan action");
- goto err_exit;
- }
-
- parse_state.is_vf = otx2_dev_is_vf(hw);
-
- rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to insert filter");
- goto err_exit;
- }
-
- rc = flow_program_rss_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program rss action");
- goto err_exit;
- }
-
- if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
- rc = flow_update_sec_tt(dev, actions);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to update tt with sec act");
- goto err_exit;
- }
- }
-
- list = &hw->npc_flow.flow_list[flow->priority];
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > flow->mcam_id) {
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- return flow;
- }
- }
-
- TAILQ_INSERT_TAIL(list, flow, next);
- return flow;
-
-err_exit:
- rte_free(flow);
- return NULL;
-}
-
-static int
-otx2_flow_destroy(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_bitmap *bmap;
- uint16_t match_id;
- int rc;
-
- match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
- NIX_RX_ACT_MATCH_MASK;
-
- if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- if (rte_atomic32_read(&npc->mark_actions) == 0)
- return -EINVAL;
-
- /* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
- hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
- }
-
- if (flow->nix_intf == OTX2_INTF_RX && flow->vtag_action) {
- npc->vtag_actions--;
- if (npc->vtag_actions == 0) {
- if (hw->vlan_info.strip_on == 0) {
- hw->rx_offload_flags &=
- ~NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
- }
- }
-
- rc = flow_free_rss_action(dev, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to free rss action");
- }
-
- rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to destroy filter");
- }
-
- TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
-
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
-
- rte_free(flow);
- return 0;
-}
-
-static int
-otx2_flow_flush(struct rte_eth_dev *dev,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries "
- ", counters");
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to flush filter");
- return -rte_errno;
- }
-
- return 0;
-}
-
-static int
-otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
- int enable __rte_unused,
- struct rte_flow_error *error)
-{
- /*
- * If we support, we need to un-install the default mcam
- * entry for this port.
- */
-
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Flow isolation not supported");
-
- return -rte_errno;
-}
-
-static int
-otx2_flow_query(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- const struct rte_flow_action *action,
- void *data,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct rte_flow_query_count *query = data;
- struct otx2_mbox *mbox = hw->mbox;
- const char *errmsg = NULL;
- int errcode = ENOTSUP;
- int rc;
-
- if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
- errmsg = "Only COUNT is supported in query";
- goto err_exit;
- }
-
- if (flow->ctr_id == NPC_COUNTER_NONE) {
- errmsg = "Counter is not available";
- goto err_exit;
- }
-
- rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error reading flow counter";
- goto err_exit;
- }
- query->hits_set = 1;
- query->bytes_set = 0;
-
- if (query->reset)
- rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error clearing flow counter";
- goto err_exit;
- }
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- errmsg);
- return -rte_errno;
-}
-
-static int
-otx2_flow_dev_dump(struct rte_eth_dev *dev,
- struct rte_flow *flow, FILE *file,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- uint32_t max_prio, i;
-
- if (file == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Invalid file");
- return -EINVAL;
- }
- if (flow != NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL,
- "Invalid argument");
- return -EINVAL;
- }
-
- max_prio = hw->npc_flow.flow_max_priority;
-
- for (i = 0; i < max_prio; i++) {
- list = &hw->npc_flow.flow_list[i];
-
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- otx2_flow_dump(file, hw, flow_iter);
- }
- }
-
- return 0;
-}
-
-const struct rte_flow_ops otx2_flow_ops = {
- .validate = otx2_flow_validate,
- .create = otx2_flow_create,
- .destroy = otx2_flow_destroy,
- .flush = otx2_flow_flush,
- .query = otx2_flow_query,
- .isolate = otx2_flow_isolate,
- .dev_dump = otx2_flow_dev_dump,
-};
-
-static int
-flow_supp_key_len(uint32_t supp_mask)
-{
- int nib_count = 0;
- while (supp_mask) {
- nib_count++;
- supp_mask &= (supp_mask - 1);
- }
- return nib_count * 4;
-}
-
-/* Refer HRM register:
- * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
- * and
- * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
- **/
-#define BYTESM1_SHIFT 16
-#define HDR_OFF_SHIFT 8
-static void
-flow_update_kex_info(struct npc_xtract_info *xtract_info,
- uint64_t val)
-{
- xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
- xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
- xtract_info->key_off = val & 0x3f;
- xtract_info->enable = ((val >> 7) & 0x1);
- xtract_info->flags_enable = ((val >> 6) & 0x1);
-}
-
-static void
-flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
- struct npc_get_kex_cfg_rsp *kex_rsp)
-{
- volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
- [NPC_MAX_LD];
- struct npc_xtract_info *x_info = NULL;
- int lid, lt, ld, fl, ix;
- otx2_dxcfg_t *p;
- uint64_t keyw;
- uint64_t val;
-
- npc->keyx_supp_nmask[NPC_MCAM_RX] =
- kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_supp_nmask[NPC_MCAM_TX] =
- kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_len[NPC_MCAM_RX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
- npc->keyx_len[NPC_MCAM_TX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
-
- keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_RX] = keyw;
- keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_TX] = keyw;
-
- /* Update KEX_LD_FLAG */
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- for (fl = 0; fl < NPC_MAX_LFL; fl++) {
- x_info =
- &npc->prx_fxcfg[ix][ld][fl].xtract[0];
- val = kex_rsp->intf_ld_flags[ix][ld][fl];
- flow_update_kex_info(x_info, val);
- }
- }
- }
-
- /* Update LID, LT and LDATA cfg */
- p = &npc->prx_dxcfg;
- q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
- (&kex_rsp->intf_lid_lt_ld);
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- for (lt = 0; lt < NPC_MAX_LT; lt++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- x_info = &(*p)[ix][lid][lt].xtract[ld];
- val = (*q)[ix][lid][lt][ld];
- flow_update_kex_info(x_info, val);
- }
- }
- }
- }
- /* Update LDATA Flags cfg */
- npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
- npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
-}
-
-static struct otx2_idev_kex_cfg *
-flow_intra_dev_kex_cfg(void)
-{
- static const char name[] = "octeontx2_intra_device_kex_conf";
- struct otx2_idev_kex_cfg *idev;
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(name);
- if (mz)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz) {
- idev = mz->addr;
- rte_atomic16_set(&idev->kex_refcnt, 0);
- return idev;
- }
- return NULL;
-}
-
-static int
-flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
-{
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_get_kex_cfg_rsp *kex_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- char mkex_pfl_name[MKEX_NAME_LEN];
- struct otx2_idev_kex_cfg *idev;
- int rc = 0;
-
- idev = flow_intra_dev_kex_cfg();
- if (!idev)
- return -ENOMEM;
-
- /* Is kex_cfg read by any another driver? */
- if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
- /* Call mailbox to get key & data size */
- (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config");
- goto done;
- }
- memcpy(&idev->kex_cfg, kex_rsp,
- sizeof(struct npc_get_kex_cfg_rsp));
- }
-
- otx2_mbox_memcpy(mkex_pfl_name,
- idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
-
- strlcpy((char *)dev->mkex_pfl_name,
- mkex_pfl_name, sizeof(dev->mkex_pfl_name));
-
- flow_process_mkex_cfg(npc, &idev->kex_cfg);
-
-done:
- return rc;
-}
-
-#define OTX2_MCAM_TOT_ENTRIES_96XX (4096)
-#define OTX2_MCAM_TOT_ENTRIES_98XX (16384)
-
-static int otx2_mcam_tot_entries(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_98xx(dev))
- return OTX2_MCAM_TOT_ENTRIES_98XX;
- else
- return OTX2_MCAM_TOT_ENTRIES_96XX;
-}
-
-int
-otx2_flow_init(struct otx2_eth_dev *hw)
-{
- uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- uint32_t bmap_sz, tot_mcam_entries = 0;
- int rc = 0, idx;
-
- rc = flow_fetch_kex_cfg(hw);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config from idev");
- return rc;
- }
-
- rte_atomic32_init(&npc->mark_actions);
- npc->vtag_actions = 0;
-
- tot_mcam_entries = otx2_mcam_tot_entries(hw);
- npc->mcam_entries = tot_mcam_entries >> npc->keyw[NPC_MCAM_RX];
- /* Free, free_rev, live and live_rev entries */
- bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
- mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
- RTE_CACHE_LINE_SIZE);
- if (mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- return rc;
- }
-
- npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_mcam_ents_info),
- 0);
- if (npc->flow_entry_info == NULL) {
- otx2_err("flow_entry_info alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries == NULL) {
- otx2_err("free_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries_rev == NULL) {
- otx2_err("free_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries == NULL) {
- otx2_err("live_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries_rev == NULL) {
- otx2_err("live_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_flow_list),
- 0);
- if (npc->flow_list == NULL) {
- otx2_err("flow_list alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc_mem = mem;
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- TAILQ_INIT(&npc->flow_list[idx]);
-
- npc->free_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->free_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->flow_entry_info[idx].free_ent = 0;
- npc->flow_entry_info[idx].live_ent = 0;
- npc->flow_entry_info[idx].max_id = 0;
- npc->flow_entry_info[idx].min_id = ~(0);
- }
-
- npc->rss_grps = NIX_RSS_GRPS;
-
- bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
- nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
- if (nix_mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
-
- /* Group 0 will be used for RSS,
- * 1 -7 will be used for rte_flow RSS action
- */
- rte_bitmap_set(npc->rss_grp_entries, 0);
-
- return 0;
-
-err:
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
- if (npc_mem)
- rte_free(npc_mem);
- return rc;
-}
-
-int
-otx2_flow_fini(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries, counters");
- return rc;
- }
-
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
deleted file mode 100644
index 790e6ef1e8..0000000000
--- a/drivers/net/octeontx2/otx2_flow.h
+++ /dev/null
@@ -1,414 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_FLOW_H__
-#define __OTX2_FLOW_H__
-
-#include <stdint.h>
-
-#include <rte_flow_driver.h>
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-#include "otx2_mbox.h"
-
-struct otx2_eth_dev;
-
-int otx2_flow_init(struct otx2_eth_dev *hw);
-int otx2_flow_fini(struct otx2_eth_dev *hw);
-extern const struct rte_flow_ops otx2_flow_ops;
-
-enum {
- OTX2_INTF_RX = 0,
- OTX2_INTF_TX = 1,
- OTX2_INTF_MAX = 2,
-};
-
-#define NPC_IH_LENGTH 8
-#define NPC_TPID_LENGTH 2
-#define NPC_HIGIG2_LENGTH 16
-#define NPC_MAX_RAW_ITEM_LEN 16
-#define NPC_COUNTER_NONE (-1)
-/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
-#define NPC_MAX_EXTRACT_DATA_LEN (64)
-#define NPC_LDATA_LFLAG_LEN (16)
-#define NPC_MAX_KEY_NIBBLES (31)
-/* Nibble offsets */
-#define NPC_LAYER_KEYX_SZ (3)
-#define NPC_PARSE_KEX_S_LA_OFFSET (7)
-#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
- ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
- + NPC_PARSE_KEX_S_LA_OFFSET)
-
-
-/* supported flow actions flags */
-#define OTX2_FLOW_ACT_MARK (1 << 0)
-#define OTX2_FLOW_ACT_FLAG (1 << 1)
-#define OTX2_FLOW_ACT_DROP (1 << 2)
-#define OTX2_FLOW_ACT_QUEUE (1 << 3)
-#define OTX2_FLOW_ACT_RSS (1 << 4)
-#define OTX2_FLOW_ACT_DUP (1 << 5)
-#define OTX2_FLOW_ACT_SEC (1 << 6)
-#define OTX2_FLOW_ACT_COUNT (1 << 7)
-#define OTX2_FLOW_ACT_PF (1 << 8)
-#define OTX2_FLOW_ACT_VF (1 << 9)
-#define OTX2_FLOW_ACT_VLAN_STRIP (1 << 10)
-#define OTX2_FLOW_ACT_VLAN_INSERT (1 << 11)
-#define OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT (1 << 12)
-#define OTX2_FLOW_ACT_VLAN_PCP_INSERT (1 << 13)
-
-/* terminating actions */
-#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
- OTX2_FLOW_ACT_QUEUE | \
- OTX2_FLOW_ACT_RSS | \
- OTX2_FLOW_ACT_DUP | \
- OTX2_FLOW_ACT_SEC)
-
-/* This mark value indicates flag action */
-#define OTX2_FLOW_FLAG_VAL (0xffff)
-
-#define NIX_RX_ACT_MATCH_OFFSET (40)
-#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
-
-#define NIX_RSS_ACT_GRP_OFFSET (20)
-#define NIX_RSS_ACT_ALG_OFFSET (56)
-#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
-#define NIX_RSS_ACT_ALG_MASK (0x1F)
-
-/* PMD-specific definition of the opaque struct rte_flow */
-#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
-
-enum npc_mcam_intf {
- NPC_MCAM_RX,
- NPC_MCAM_TX
-};
-
-struct npc_xtract_info {
- /* Length in bytes of pkt data extracted. len = 0
- * indicates that extraction is disabled.
- */
- uint8_t len;
- uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
- uint8_t key_off; /* Byte offset in MCAM key where data is placed */
- uint8_t enable; /* Extraction enabled or disabled */
- uint8_t flags_enable; /* Flags extraction enabled */
-};
-
-/* Information for a given {LAYER, LTYPE} */
-struct npc_lid_lt_xtract_info {
- /* Info derived from parser configuration */
- uint16_t npc_proto; /* Network protocol identified */
- uint8_t valid_flags_mask; /* Flags applicable */
- uint8_t is_terminating:1; /* No more parsing */
- struct npc_xtract_info xtract[NPC_MAX_LD];
-};
-
-union npc_kex_ldata_flags_cfg {
- struct {
- #if defined(__BIG_ENDIAN_BITFIELD)
- uint64_t rvsd_62_1 : 61;
- uint64_t lid : 3;
- #else
- uint64_t lid : 3;
- uint64_t rvsd_62_1 : 61;
- #endif
- } s;
-
- uint64_t i;
-};
-
-typedef struct npc_lid_lt_xtract_info
- otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
-typedef struct npc_lid_lt_xtract_info
- otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
-
-
-/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
-struct npc_get_datax_cfg {
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
- /* Extract information indexed with [LID][LTYPE] */
- struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
- /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
- * Fields flags_ena_ld0, flags_ena_ld1 in
- * struct npc_lid_lt_xtract_info indicate if this is applicable
- * for a given {LAYER, LTYPE}
- */
- struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
-};
-
-struct otx2_mcam_ents_info {
- /* Current max & min values of mcam index */
- uint32_t max_id;
- uint32_t min_id;
- uint32_t free_ent;
- uint32_t live_ent;
-};
-
-struct otx2_flow_dump_data {
- uint8_t lid;
- uint16_t ltype;
-};
-
-struct rte_flow {
- uint8_t nix_intf;
- uint32_t mcam_id;
- int32_t ctr_id;
- uint32_t priority;
- /* Contiguous match string */
- uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t npc_action;
- uint64_t vtag_action;
- struct otx2_flow_dump_data dump_data[32];
- uint16_t num_patterns;
- TAILQ_ENTRY(rte_flow) next;
-};
-
-TAILQ_HEAD(otx2_flow_list, rte_flow);
-
-/* Accessed from ethdev private - otx2_eth_dev */
-struct otx2_npc_flow_info {
- rte_atomic32_t mark_actions;
- uint32_t vtag_actions;
- uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */
- uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
- uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
- uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
- uint32_t mcam_entries; /* mcam entries supported */
- otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
- otx2_fxcfg_t prx_fxcfg; /* Flag extract */
- otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
- /* mcam entry info per priority level: both free & in-use */
- struct otx2_mcam_ents_info *flow_entry_info;
- /* Bitmap of free preallocated entries in ascending index &
- * descending priority
- */
- struct rte_bitmap **free_entries;
- /* Bitmap of free preallocated entries in descending index &
- * ascending priority
- */
- struct rte_bitmap **free_entries_rev;
- /* Bitmap of live entries in ascending index & descending priority */
- struct rte_bitmap **live_entries;
- /* Bitmap of live entries in descending index & ascending priority */
- struct rte_bitmap **live_entries_rev;
- /* Priority bucket wise tail queue of all rte_flow resources */
- struct otx2_flow_list *flow_list;
- uint32_t rss_grps; /* rss groups supported */
- struct rte_bitmap *rss_grp_entries;
- uint16_t channel; /*rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
- uint16_t switch_header_type;
-};
-
-struct otx2_parse_state {
- struct otx2_npc_flow_info *npc;
- const struct rte_flow_item *pattern;
- const struct rte_flow_item *last_pattern; /* Temp usage */
- struct rte_flow_error *error;
- struct rte_flow *flow;
- uint8_t tunnel;
- uint8_t terminate;
- uint8_t layer_mask;
- uint8_t lt[NPC_MAX_LID];
- uint8_t flags[NPC_MAX_LID];
- uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
- uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
- bool is_vf;
-};
-
-struct otx2_flow_item_info {
- const void *def_mask; /* rte_flow default mask */
- void *hw_mask; /* hardware supported mask */
- int len; /* length of item */
- const void *spec; /* spec to use, NULL implies match any */
- const void *mask; /* mask to use */
- uint8_t hw_hdr_len; /* Extra data len at each layer*/
-};
-
-struct otx2_idev_kex_cfg {
- struct npc_get_kex_cfg_rsp kex_cfg;
- rte_atomic16_t kex_refcnt;
-};
-
-enum npc_kpu_parser_flag {
- NPC_F_NA = 0,
- NPC_F_PKI,
- NPC_F_PKI_VLAN,
- NPC_F_PKI_ETAG,
- NPC_F_PKI_ITAG,
- NPC_F_PKI_MPLS,
- NPC_F_PKI_NSH,
- NPC_F_ETYPE_UNK,
- NPC_F_ETHER_VLAN,
- NPC_F_ETHER_ETAG,
- NPC_F_ETHER_ITAG,
- NPC_F_ETHER_MPLS,
- NPC_F_ETHER_NSH,
- NPC_F_STAG_CTAG,
- NPC_F_STAG_CTAG_UNK,
- NPC_F_STAG_STAG_CTAG,
- NPC_F_STAG_STAG_STAG,
- NPC_F_QINQ_CTAG,
- NPC_F_QINQ_CTAG_UNK,
- NPC_F_QINQ_QINQ_CTAG,
- NPC_F_QINQ_QINQ_QINQ,
- NPC_F_BTAG_ITAG,
- NPC_F_BTAG_ITAG_STAG,
- NPC_F_BTAG_ITAG_CTAG,
- NPC_F_BTAG_ITAG_UNK,
- NPC_F_ETAG_CTAG,
- NPC_F_ETAG_BTAG_ITAG,
- NPC_F_ETAG_STAG,
- NPC_F_ETAG_QINQ,
- NPC_F_ETAG_ITAG,
- NPC_F_ETAG_ITAG_STAG,
- NPC_F_ETAG_ITAG_CTAG,
- NPC_F_ETAG_ITAG_UNK,
- NPC_F_ITAG_STAG_CTAG,
- NPC_F_ITAG_STAG,
- NPC_F_ITAG_CTAG,
- NPC_F_MPLS_4_LABELS,
- NPC_F_MPLS_3_LABELS,
- NPC_F_MPLS_2_LABELS,
- NPC_F_IP_HAS_OPTIONS,
- NPC_F_IP_IP_IN_IP,
- NPC_F_IP_6TO4,
- NPC_F_IP_MPLS_IN_IP,
- NPC_F_IP_UNK_PROTO,
- NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
- NPC_F_IP_6TO4_HAS_OPTIONS,
- NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
- NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
- NPC_F_IP6_HAS_EXT,
- NPC_F_IP6_TUN_IP6,
- NPC_F_IP6_MPLS_IN_IP,
- NPC_F_TCP_HAS_OPTIONS,
- NPC_F_TCP_HTTP,
- NPC_F_TCP_HTTPS,
- NPC_F_TCP_PPTP,
- NPC_F_TCP_UNK_PORT,
- NPC_F_TCP_HTTP_HAS_OPTIONS,
- NPC_F_TCP_HTTPS_HAS_OPTIONS,
- NPC_F_TCP_PPTP_HAS_OPTIONS,
- NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
- NPC_F_UDP_VXLAN,
- NPC_F_UDP_VXLAN_NOVNI,
- NPC_F_UDP_VXLAN_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE,
- NPC_F_UDP_VXLANGPE_NSH,
- NPC_F_UDP_VXLANGPE_MPLS,
- NPC_F_UDP_VXLANGPE_NOVNI,
- NPC_F_UDP_VXLANGPE_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
- NPC_F_UDP_VXLANGPE_UNK,
- NPC_F_UDP_VXLANGPE_NONP,
- NPC_F_UDP_GTP_GTPC,
- NPC_F_UDP_GTP_GTPU_G_PDU,
- NPC_F_UDP_GTP_GTPU_UNK,
- NPC_F_UDP_UNK_PORT,
- NPC_F_UDP_GENEVE,
- NPC_F_UDP_GENEVE_OAM,
- NPC_F_UDP_GENEVE_CRI_OPT,
- NPC_F_UDP_GENEVE_OAM_CRI_OPT,
- NPC_F_GRE_NVGRE,
- NPC_F_GRE_HAS_SRE,
- NPC_F_GRE_HAS_CSUM,
- NPC_F_GRE_HAS_KEY,
- NPC_F_GRE_HAS_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY,
- NPC_F_GRE_HAS_CSUM_SEQ,
- NPC_F_GRE_HAS_KEY_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY_SEQ,
- NPC_F_GRE_HAS_ROUTE,
- NPC_F_GRE_UNK_PROTO,
- NPC_F_GRE_VER1,
- NPC_F_GRE_VER1_HAS_SEQ,
- NPC_F_GRE_VER1_HAS_ACK,
- NPC_F_GRE_VER1_HAS_SEQ_ACK,
- NPC_F_GRE_VER1_UNK_PROTO,
- NPC_F_TU_ETHER_UNK,
- NPC_F_TU_ETHER_CTAG,
- NPC_F_TU_ETHER_CTAG_UNK,
- NPC_F_TU_ETHER_STAG_CTAG,
- NPC_F_TU_ETHER_STAG_CTAG_UNK,
- NPC_F_TU_ETHER_STAG,
- NPC_F_TU_ETHER_STAG_UNK,
- NPC_F_TU_ETHER_QINQ_CTAG,
- NPC_F_TU_ETHER_QINQ_CTAG_UNK,
- NPC_F_TU_ETHER_QINQ,
- NPC_F_TU_ETHER_QINQ_UNK,
- NPC_F_LAST /* has to be the last item */
-};
-
-
-int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
-
-int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count);
-
-int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
-
-int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
-
-int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
-
-int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt, uint8_t flags);
-
-int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error);
-
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
-int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
- struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info);
-
-void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt);
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
-
-int otx2_flow_parse_lh(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lg(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lf(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_le(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_ld(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lc(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lb(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_la(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow);
-
-int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
-
-int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-void otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow);
-#endif /* __OTX2_FLOW_H__ */
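
For context on the `npc_xtract_info` fields declared in the header above: the removed `flow_update_kex_info()` in otx2_flow.c unpacks them from one `NPC_AF_INTF()_LID()_LT()_LD()_CFG` register value — bits [19:16] hold BYTESM1 (extract length minus one), bits [15:8] the header byte offset, bit 7 enable, bit 6 flags-enable, and bits [5:0] the byte offset into the MCAM key. A hedged standalone sketch of that decode (struct and function names here are illustrative, not the driver's):

```c
#include <stdint.h>

/* Mirrors the field layout decoded by flow_update_kex_info(). */
struct kex_xtract {
	uint8_t len;          /* bytes of packet data extracted */
	uint8_t hdr_off;      /* byte offset within the protocol header */
	uint8_t key_off;      /* byte offset in the MCAM key */
	uint8_t enable;       /* extraction enabled */
	uint8_t flags_enable; /* flags-based extraction enabled */
};

void kex_decode(struct kex_xtract *x, uint64_t val)
{
	x->len = ((val >> 16) & 0xf) + 1; /* BYTESM1: stored as len - 1 */
	x->hdr_off = (val >> 8) & 0xff;
	x->key_off = val & 0x3f;
	x->enable = (val >> 7) & 0x1;
	x->flags_enable = (val >> 6) & 0x1;
}
```

Decoding a value such as `0x30C88` (BYTESM1 = 3, header offset 12, enable bit set, key offset 8) gives `len = 4`, `hdr_off = 12`, `key_off = 8`, `enable = 1`, `flags_enable = 0`.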
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
deleted file mode 100644
index 071740de86..0000000000
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_bp_cfg_req *req;
- struct nix_bp_cfg_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_sdp(dev))
- return 0;
-
- if (enb) {
- req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
- req->bpid_per_chan = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || req->chan_cnt != rsp->chan_cnt) {
- otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
- rsp->chan_cnt, req->chan_cnt, rc);
- return rc;
- }
-
- fc->bpid[0] = rsp->chan_bpid[0];
- } else {
- req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
-
- rc = otx2_mbox_process(mbox);
-
- memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
- }
-
- return rc;
-}
-
-int
-otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_pause_frm_cfg *req, *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_ETH_FC_NONE;
- return 0;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto done;
-
- if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_FULL;
- else if (rsp->rx_pause)
- fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
- else if (rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
- else
- fc_conf->mode = RTE_ETH_FC_NONE;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq)
- return -ENOMEM;
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- if (enb) {
- aq->cq.bpid = fc->bpid[0];
- aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
- aq->cq.bp = rxq->cq_drop;
- aq->cq_mask.bp = ~(aq->cq_mask.bp);
- }
-
- aq->cq.bp_ena = !!enb;
- aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-static int
-otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- return otx2_nix_cq_bp_cfg(eth_dev, enb);
-}
-
-int
-otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_pause_frm_cfg *req;
- uint8_t tx_pause, rx_pause;
- int rc = 0;
-
- if (otx2_dev_is_lbk(dev)) {
- otx2_info("No flow control support for LBK bound ethports");
- return -ENOTSUP;
- }
-
- if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
- fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
- otx2_info("Flowctrl parameter is not supported");
- return -EINVAL;
- }
-
- if (fc_conf->mode == fc->mode)
- return 0;
-
- rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
-
- /* Check if TX pause frame is already enabled or not */
- if (fc->tx_pause ^ tx_pause) {
- if (otx2_dev_is_Ax(dev) && eth_dev->data->dev_started) {
- /* on Ax, CQ should be in disabled state
- * while setting flow control configuration.
- */
- otx2_info("Stop the port=%d for setting flow control\n",
- eth_dev->data->port_id);
- return 0;
- }
- /* TX pause frames, enable/disable flowctrl on RX side. */
- rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
- if (rc)
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 1;
- req->rx_pause = rx_pause;
- req->tx_pause = tx_pause;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- fc->tx_pause = tx_pause;
- fc->rx_pause = rx_pause;
- fc->mode = fc_conf->mode;
-
- return rc;
-}
-
-int
-otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- fc_conf.mode = fc->mode;
-
- /* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
- if (otx2_dev_is_Ax(dev) &&
- (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
- fc_conf.mode =
- (fc_conf.mode == RTE_ETH_FC_FULL ||
- fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
- RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
- }
-
- return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
-}
-
-int
-otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
- int rc;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
- * by AF driver, update those info in PMD structure.
- */
- rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
- if (rc)
- goto exit;
-
- fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
-
-exit:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_dump.c b/drivers/net/octeontx2/otx2_flow_dump.c
deleted file mode 100644
index 3f86071300..0000000000
--- a/drivers/net/octeontx2/otx2_flow_dump.c
+++ /dev/null
@@ -1,595 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-#define NPC_MAX_FIELD_NAME_SIZE 80
-#define NPC_RX_ACTIONOP_MASK GENMASK(3, 0)
-#define NPC_RX_ACTION_PFFUNC_MASK GENMASK(19, 4)
-#define NPC_RX_ACTION_INDEX_MASK GENMASK(39, 20)
-#define NPC_RX_ACTION_MATCH_MASK GENMASK(55, 40)
-#define NPC_RX_ACTION_FLOWKEY_MASK GENMASK(60, 56)
-
-#define NPC_TX_ACTION_INDEX_MASK GENMASK(31, 12)
-#define NPC_TX_ACTION_MATCH_MASK GENMASK(47, 32)
-
-#define NIX_RX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_RX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_RX_VTAGACT_VTAG0_TYPE_MASK GENMASK(14, 12)
-#define NIX_RX_VTAGACT_VTAG0_VALID_MASK BIT_ULL(15)
-
-#define NIX_RX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_RX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_RX_VTAGACT_VTAG1_TYPE_MASK GENMASK(46, 44)
-#define NIX_RX_VTAGACT_VTAG1_VALID_MASK BIT_ULL(47)
-
-#define NIX_TX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_TX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_TX_VTAGACT_VTAG0_OP_MASK GENMASK(13, 12)
-#define NIX_TX_VTAGACT_VTAG0_DEF_MASK GENMASK(25, 16)
-
-#define NIX_TX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_TX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_TX_VTAGACT_VTAG1_OP_MASK GENMASK(45, 44)
-#define NIX_TX_VTAGACT_VTAG1_DEF_MASK GENMASK(57, 48)
-
-struct npc_rx_parse_nibble_s {
- uint16_t chan : 3;
- uint16_t errlev : 1;
- uint16_t errcode : 2;
- uint16_t l2l3bm : 1;
- uint16_t laflags : 2;
- uint16_t latype : 1;
- uint16_t lbflags : 2;
- uint16_t lbtype : 1;
- uint16_t lcflags : 2;
- uint16_t lctype : 1;
- uint16_t ldflags : 2;
- uint16_t ldtype : 1;
- uint16_t leflags : 2;
- uint16_t letype : 1;
- uint16_t lfflags : 2;
- uint16_t lftype : 1;
- uint16_t lgflags : 2;
- uint16_t lgtype : 1;
- uint16_t lhflags : 2;
- uint16_t lhtype : 1;
-} __rte_packed;
-
-const char *intf_str[] = {
- "NIX-RX",
- "NIX-TX",
-};
-
-const char *ltype_str[NPC_MAX_LID][NPC_MAX_LT] = {
- [NPC_LID_LA][0] = "NONE",
- [NPC_LID_LA][NPC_LT_LA_ETHER] = "LA_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_ETHER] = "LA_IH_NIX_ETHER",
- [NPC_LID_LA][NPC_LT_LA_HIGIG2_ETHER] = "LA_HIGIG2_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_HIGIG2_ETHER] = "LA_IH_NIX_HIGIG2_ETHER",
- [NPC_LID_LB][0] = "NONE",
- [NPC_LID_LB][NPC_LT_LB_CTAG] = "LB_CTAG",
- [NPC_LID_LB][NPC_LT_LB_STAG_QINQ] = "LB_STAG_QINQ",
- [NPC_LID_LB][NPC_LT_LB_ETAG] = "LB_ETAG",
- [NPC_LID_LB][NPC_LT_LB_EXDSA] = "LB_EXDSA",
- [NPC_LID_LB][NPC_LT_LB_VLAN_EXDSA] = "LB_VLAN_EXDSA",
- [NPC_LID_LC][0] = "NONE",
- [NPC_LID_LC][NPC_LT_LC_IP] = "LC_IP",
- [NPC_LID_LC][NPC_LT_LC_IP6] = "LC_IP6",
- [NPC_LID_LC][NPC_LT_LC_ARP] = "LC_ARP",
- [NPC_LID_LC][NPC_LT_LC_IP6_EXT] = "LC_IP6_EXT",
- [NPC_LID_LC][NPC_LT_LC_NGIO] = "LC_NGIO",
- [NPC_LID_LD][0] = "NONE",
- [NPC_LID_LD][NPC_LT_LD_ICMP] = "LD_ICMP",
- [NPC_LID_LD][NPC_LT_LD_ICMP6] = "LD_ICMP6",
- [NPC_LID_LD][NPC_LT_LD_UDP] = "LD_UDP",
- [NPC_LID_LD][NPC_LT_LD_TCP] = "LD_TCP",
- [NPC_LID_LD][NPC_LT_LD_SCTP] = "LD_SCTP",
- [NPC_LID_LD][NPC_LT_LD_GRE] = "LD_GRE",
- [NPC_LID_LD][NPC_LT_LD_NVGRE] = "LD_NVGRE",
- [NPC_LID_LE][0] = "NONE",
- [NPC_LID_LE][NPC_LT_LE_VXLAN] = "LE_VXLAN",
- [NPC_LID_LE][NPC_LT_LE_ESP] = "LE_ESP",
- [NPC_LID_LE][NPC_LT_LE_GTPC] = "LE_GTPC",
- [NPC_LID_LE][NPC_LT_LE_GTPU] = "LE_GTPU",
- [NPC_LID_LE][NPC_LT_LE_GENEVE] = "LE_GENEVE",
- [NPC_LID_LE][NPC_LT_LE_VXLANGPE] = "LE_VXLANGPE",
- [NPC_LID_LF][0] = "NONE",
- [NPC_LID_LF][NPC_LT_LF_TU_ETHER] = "LF_TU_ETHER",
- [NPC_LID_LG][0] = "NONE",
- [NPC_LID_LG][NPC_LT_LG_TU_IP] = "LG_TU_IP",
- [NPC_LID_LG][NPC_LT_LG_TU_IP6] = "LG_TU_IP6",
- [NPC_LID_LH][0] = "NONE",
- [NPC_LID_LH][NPC_LT_LH_TU_UDP] = "LH_TU_UDP",
- [NPC_LID_LH][NPC_LT_LH_TU_TCP] = "LH_TU_TCP",
- [NPC_LID_LH][NPC_LT_LH_TU_SCTP] = "LH_TU_SCTP",
- [NPC_LID_LH][NPC_LT_LH_TU_ESP] = "LH_TU_ESP",
-};
-
-static uint16_t
-otx2_get_nibbles(struct rte_flow *flow, uint16_t size, uint32_t bit_offset)
-{
- uint32_t byte_index, noffset;
- uint16_t data, mask;
- uint8_t *bytes;
-
- bytes = (uint8_t *)flow->mcam_data;
- mask = (1ULL << (size * 4)) - 1;
- byte_index = bit_offset / 8;
- noffset = bit_offset % 8;
- data = *(uint16_t *)&bytes[byte_index];
- data >>= noffset;
- data &= mask;
-
- return data;
-}
-
-static void
-otx2_flow_print_parse_nibbles(FILE *file, struct rte_flow *flow,
- uint64_t parse_nibbles)
-{
- struct npc_rx_parse_nibble_s *rx_parse;
- uint32_t data, offset = 0;
-
- rx_parse = (struct npc_rx_parse_nibble_s *)&parse_nibbles;
-
- if (rx_parse->chan) {
- data = otx2_get_nibbles(flow, 3, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_CHAN:%#03X\n", data);
- offset += 12;
- }
-
- if (rx_parse->errlev) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRLEV:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->errcode) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRCODE:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->l2l3bm) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_L2L3_BCAST:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->latype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_LTYPE:%s\n",
- ltype_str[NPC_LID_LA][data]);
- offset += 4;
- }
-
- if (rx_parse->laflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lbtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_LTYPE:%s\n",
- ltype_str[NPC_LID_LB][data]);
- offset += 4;
- }
-
- if (rx_parse->lbflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lctype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_LTYPE:%s\n",
- ltype_str[NPC_LID_LC][data]);
- offset += 4;
- }
-
- if (rx_parse->lcflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->ldtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_LTYPE:%s\n",
- ltype_str[NPC_LID_LD][data]);
- offset += 4;
- }
-
- if (rx_parse->ldflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->letype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_LTYPE:%s\n",
- ltype_str[NPC_LID_LE][data]);
- offset += 4;
- }
-
- if (rx_parse->leflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lftype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_LTYPE:%s\n",
- ltype_str[NPC_LID_LF][data]);
- offset += 4;
- }
-
- if (rx_parse->lfflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lgtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_LTYPE:%s\n",
- ltype_str[NPC_LID_LG][data]);
- offset += 4;
- }
-
- if (rx_parse->lgflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lhtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_LTYPE:%s\n",
- ltype_str[NPC_LID_LH][data]);
- offset += 4;
- }
-
- if (rx_parse->lhflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_FLAGS:%#02X\n", data);
- }
-}
-
-static void
-otx2_flow_print_xtractinfo(FILE *file, struct npc_xtract_info *lfinfo,
- struct rte_flow *flow, int lid, int lt)
-{
- uint8_t *datastart, *maskstart;
- int i;
-
- datastart = (uint8_t *)&flow->mcam_data + lfinfo->key_off;
- maskstart = (uint8_t *)&flow->mcam_mask + lfinfo->key_off;
-
- fprintf(file, "\t%s, hdr offset:%#X, len:%#X, key offset:%#X, ",
- ltype_str[lid][lt], lfinfo->hdr_off,
- lfinfo->len, lfinfo->key_off);
-
- fprintf(file, "Data:0X");
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", datastart[i]);
-
- fprintf(file, ", ");
-
- fprintf(file, "Mask:0X");
-
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", maskstart[i]);
-
- fprintf(file, "\n");
-}
-
-static void
-otx2_flow_print_item(FILE *file, struct otx2_eth_dev *hw,
- struct npc_xtract_info *xinfo, struct rte_flow *flow,
- int intf, int lid, int lt, int ld)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_xtract_info *lflags_info;
- int i, lf_cfg;
-
- otx2_flow_print_xtractinfo(file, xinfo, flow, lid, lt);
-
- if (xinfo->flags_enable) {
- lf_cfg = npc_flow->prx_lfcfg[ld].i;
-
- if (lf_cfg == lid) {
- for (i = 0; i < NPC_MAX_LFL; i++) {
- lflags_info = npc_flow->prx_fxcfg[intf]
- [ld][i].xtract;
-
- otx2_flow_print_xtractinfo(file, lflags_info,
- flow, lid, lt);
- }
- }
- }
-}
-
-static void
-otx2_flow_dump_patterns(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_lid_lt_xtract_info *lt_xinfo;
- struct npc_xtract_info *xinfo;
- uint32_t intf, lid, ld, i;
- uint64_t parse_nibbles;
- uint16_t ltype;
-
- intf = flow->nix_intf;
- parse_nibbles = npc_flow->keyx_supp_nmask[intf];
- otx2_flow_print_parse_nibbles(file, flow, parse_nibbles);
-
- for (i = 0; i < flow->num_patterns; i++) {
- lid = flow->dump_data[i].lid;
- ltype = flow->dump_data[i].ltype;
- lt_xinfo = &npc_flow->prx_dxcfg[intf][lid][ltype];
-
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
-			xinfo = &lt_xinfo->xtract[ld];
- if (!xinfo->enable)
- continue;
- otx2_flow_print_item(file, hw, xinfo, flow, intf, lid,
- ltype, ld);
- }
- }
-}
-
-static void
-otx2_flow_dump_tx_action(FILE *file, uint64_t npc_action)
-{
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
- uint32_t tx_op, index, match_id;
-
- tx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (tx_op) {
- case NIX_TX_ACTIONOP_DROP:
- fprintf(file, "NIX_TX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_TX_ACTIONOP_UCAST_DEFAULT:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_DEFAULT);
- break;
- case NIX_TX_ACTIONOP_UCAST_CHAN:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_CHAN);
- strncpy(index_name, "Transmit Channel:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_MCAST:
- fprintf(file, "NIX_TX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast Table Index:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_DROP_VIOL:
- fprintf(file, "NIX_TX_ACTIONOP_DROP_VIOL (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_DROP_VIOL);
- break;
- }
-
- index = ((npc_action & NPC_TX_ACTION_INDEX_MASK) >> 12) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_TX_ACTION_MATCH_MASK) >> 32) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-}
-
-static void
-otx2_flow_dump_rx_action(FILE *file, uint64_t npc_action)
-{
- uint32_t rx_op, pf_func, index, match_id, flowkey_alg;
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
-
- rx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (rx_op) {
- case NIX_RX_ACTIONOP_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_RX_ACTIONOP_UCAST:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST);
- strncpy(index_name, "RQ Index", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_UCAST_IPSEC:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST_IPSEC (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST_IPSEC);
- strncpy(index_name, "RQ Index:", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_MCAST:
- fprintf(file, "NIX_RX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_RSS:
- fprintf(file, "NIX_RX_ACTIONOP_RSS (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_RSS);
- strncpy(index_name, "RSS Group Index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_PF_FUNC_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_PF_FUNC_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_PF_FUNC_DROP);
- break;
- case NIX_RX_ACTIONOP_MIRROR:
- fprintf(file, "NIX_RX_ACTIONOP_MIRROR (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MIRROR);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- }
-
- pf_func = ((npc_action & NPC_RX_ACTION_PFFUNC_MASK) >> 4) & 0xFFFF;
-
- fprintf(file, "\tPF_FUNC: %#04X\n", pf_func);
-
- index = ((npc_action & NPC_RX_ACTION_INDEX_MASK) >> 20) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_RX_ACTION_MATCH_MASK) >> 40) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-
- flowkey_alg = ((npc_action & NPC_RX_ACTION_FLOWKEY_MASK) >> 56) & 0x1F;
-
- fprintf(file, "\tFlow Key Alg:%#X\n", flowkey_alg);
-}
-
-static void
-otx2_flow_dump_parsed_action(FILE *file, uint64_t npc_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX Action:%#016lX\n", npc_action);
- otx2_flow_dump_rx_action(file, npc_action);
- } else {
- fprintf(file, "NPC TX Action:%#016lX\n", npc_action);
- otx2_flow_dump_tx_action(file, npc_action);
- }
-}
-
-static void
-otx2_flow_dump_rx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t type, lid, relptr;
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG0_VALID_MASK) {
- relptr = vtag_action & NIX_RX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG0_LID_MASK) >> 8)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG0_TYPE_MASK) >> 12)
- & 0x7;
-
- fprintf(file, "\tVTAG0:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG1_VALID_MASK) {
- relptr = ((vtag_action & NIX_RX_VTAGACT_VTAG1_RELPTR_MASK)
- >> 32) & 0xFF;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG1_LID_MASK) >> 40)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG1_TYPE_MASK) >> 44)
- & 0x7;
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-}
-
-static void
-otx2_get_vtag_opname(uint32_t op, char *opname, int len)
-{
- switch (op) {
- case 0x0:
- strncpy(opname, "NOP", len - 1);
- break;
- case 0x1:
- strncpy(opname, "INSERT", len - 1);
- break;
- case 0x2:
- strncpy(opname, "REPLACE", len - 1);
- break;
- }
-}
-
-static void
-otx2_flow_dump_tx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t relptr, lid, op, vtag_def;
- char opname[10];
-
- relptr = vtag_action & NIX_TX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG0_LID_MASK) >> 8) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG0_OP_MASK) >> 12) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG0_DEF_MASK) >> 16)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG0 relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-
- relptr = ((vtag_action & NIX_TX_VTAGACT_VTAG1_RELPTR_MASK) >> 32)
- & 0xFF;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG1_LID_MASK) >> 40) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG1_OP_MASK) >> 44) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG1_DEF_MASK) >> 48)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-}
-
-static void
-otx2_flow_dump_vtag_action(FILE *file, uint64_t vtag_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_rx_vtag_action(file, vtag_action);
- } else {
- fprintf(file, "NPC TX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_tx_vtag_action(file, vtag_action);
- }
-}
-
-void
-otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw, struct rte_flow *flow)
-{
- bool is_rx = 0;
- int i;
-
- fprintf(file, "MCAM Index:%d\n", flow->mcam_id);
- fprintf(file, "Interface :%s (%d)\n", intf_str[flow->nix_intf],
- flow->nix_intf);
- fprintf(file, "Priority :%d\n", flow->priority);
-
- if (flow->nix_intf == NIX_INTF_RX)
- is_rx = 1;
-
- otx2_flow_dump_parsed_action(file, flow->npc_action, is_rx);
- otx2_flow_dump_vtag_action(file, flow->vtag_action, is_rx);
- fprintf(file, "Patterns:\n");
- otx2_flow_dump_patterns(file, hw, flow);
-
- fprintf(file, "MCAM Raw Data :\n");
-
- for (i = 0; i < OTX2_MAX_MCAM_WIDTH_DWORDS; i++) {
- fprintf(file, "\tDW%d :%016lX\n", i, flow->mcam_data[i]);
- fprintf(file, "\tDW%d_Mask:%016lX\n", i, flow->mcam_mask[i]);
- }
-
- fprintf(file, "\n");
-}
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
deleted file mode 100644
index 91267bbb81..0000000000
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ /dev/null
@@ -1,1239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
-{
- while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
- (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
- pattern++;
-
- return pattern;
-}
-
-/*
- * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
- * Tunnel+SCTP
- */
-int
-otx2_flow_parse_lh(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LH;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LH_TU_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LH_TU_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LH_TU_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LH_TU_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+IPv4, Tunnel+IPv6 */
-int
-otx2_flow_parse_lg(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LG;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- lt = NPC_LT_LG_TU_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- lt = NPC_LT_LG_TU_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- } else {
- /* There is no tunneled IP header */
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+Ether */
-int
-otx2_flow_parse_lf(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern, *last_pattern;
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int nr_vlans = 0;
- int rc;
-
- /* We hit this layer if there is a tunneling protocol */
- if (!pst->tunnel)
- return 0;
-
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LF;
- lt = NPC_LT_LF_TU_ETHER;
- lflags = 0;
-
- info.def_mask = &rte_flow_item_vlan_mask;
- /* No match support for vlan tags */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- /* Look ahead and find out any VLAN tags. These can be
- * detected but no data matching is available.
- */
- last_pattern = pst->pattern;
- pattern = pst->pattern + 1;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
- otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
- switch (nr_vlans) {
- case 0:
- break;
- case 1:
- lflags = NPC_F_TU_ETHER_CTAG;
- break;
- case 2:
- lflags = NPC_F_TU_ETHER_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 2 vlans with tunneled Ethernet "
- "not supported");
- return -rte_errno;
- }
-
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- info.hw_hdr_len = 0;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- pst->pattern = last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-int
-otx2_flow_parse_le(struct otx2_parse_state *pst)
-{
- /*
- * We are positioned at UDP. Scan ahead and look for
- * UDP encapsulated tunnel protocols. If available,
- * parse them. In that case handle this:
- * - RTE spec assumes we point to tunnel header.
- * - NPC parser provides offset from UDP header.
- */
-
- /*
- * Note: Add support to GENEVE, VXLAN_GPE when we
- * upgrade DPDK
- *
- * Note: Better to split flags into two nibbles:
- * - Higher nibble can have flags
- * - Lower nibble to further enumerate protocols
- * and have flags based extraction
- */
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- char hw_mask[64];
- int rc;
-
- if (pst->tunnel)
- return 0;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LE);
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LE;
- lflags = 0;
-
- /* Ensure we are not matching anything in UDP */
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc)
- return rc;
-
- info.hw_mask = &hw_mask;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- otx2_npc_dbg("Pattern->type = %d", pattern->type);
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- lflags = NPC_F_UDP_VXLAN;
- info.def_mask = &rte_flow_item_vxlan_mask;
- info.len = sizeof(struct rte_flow_item_vxlan);
- lt = NPC_LT_LE_VXLAN;
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LE_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- lflags = NPC_F_UDP_GTP_GTPC;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPC;
- break;
- case RTE_FLOW_ITEM_TYPE_GTPU:
- lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPU;
- break;
- case RTE_FLOW_ITEM_TYPE_GENEVE:
- lflags = NPC_F_UDP_GENEVE;
- info.def_mask = &rte_flow_item_geneve_mask;
- info.len = sizeof(struct rte_flow_item_geneve);
- lt = NPC_LT_LE_GENEVE;
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- lflags = NPC_F_UDP_VXLANGPE;
- info.def_mask = &rte_flow_item_vxlan_gpe_mask;
- info.len = sizeof(struct rte_flow_item_vxlan_gpe);
- lt = NPC_LT_LE_VXLANGPE;
- break;
- default:
- return 0;
- }
-
- pst->tunnel = 1;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static int
-flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
-{
- int nr_labels = 0;
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int rc;
- uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
- NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
-
- /*
- * pst->pattern points to first MPLS label. We only check
- * that subsequent labels do not have anything to match.
- */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
- nr_labels++;
-
- /* Basic validation of 2nd/3rd/4th mpls item */
- if (nr_labels > 1) {
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- pst->last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- if (nr_labels > 4) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->last_pattern,
- "more than 4 mpls labels not supported");
- return -rte_errno;
- }
-
- *flag = flag_list[nr_labels - 1];
- return 0;
-}
-
-int
-otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
-{
- /* Find number of MPLS labels */
- struct rte_flow_item_mpls hw_mask;
- struct otx2_flow_item_info info;
- int lt, lflags;
- int rc;
-
- lflags = 0;
-
- if (lid == NPC_LID_LC)
- lt = NPC_LT_LC_MPLS;
- else if (lid == NPC_LID_LD)
- lt = NPC_LT_LD_TU_MPLS_IN_IP;
- else
- lt = NPC_LT_LE_TU_MPLS_IN_UDP;
-
- /* Prepare for parsing the first item */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /*
- * Parse for more labels.
- * This sets lflags and pst->last_pattern correctly.
- */
- rc = flow_parse_mpls_label_stack(pst, &lflags);
- if (rc != 0)
- return rc;
-
- pst->tunnel = 1;
- pst->pattern = pst->last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-/*
- * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
- * GTP, GTPC, GTPU, ESP
- *
- * Note: UDP tunnel protocols are identified by flags.
- * LPTR for these protocol still points to UDP
- * header. Need flag based extraction to support
- * this.
- */
-int
-otx2_flow_parse_ld(struct otx2_parse_state *pst)
-{
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint32_t gre_key_mask = 0xffffffff;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int rc;
-
- if (pst->tunnel) {
- /* We have already parsed MPLS or IPv4/v6 followed
- * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
- * would be parsed as tunneled versions. Skip
- * this layer, except for tunneled MPLS. If LC is
- * MPLS, we have anyway skipped all stacked MPLS
- * labels.
- */
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LD);
- return 0;
- }
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
-
- lid = NPC_LID_LD;
- lflags = 0;
-
- otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ICMP:
- if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
- lt = NPC_LT_LD_ICMP6;
- else
- lt = NPC_LT_LD_ICMP;
- info.def_mask = &rte_flow_item_icmp_mask;
- info.len = sizeof(struct rte_flow_item_icmp);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LD_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LD_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LD_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &rte_flow_item_gre_mask;
- info.len = sizeof(struct rte_flow_item_gre);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE_KEY:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &gre_key_mask;
- info.len = sizeof(gre_key_mask);
- info.hw_hdr_len = 4;
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- lt = NPC_LT_LD_NVGRE;
- lflags = NPC_F_GRE_NVGRE;
- info.def_mask = &rte_flow_item_nvgre_mask;
- info.len = sizeof(struct rte_flow_item_nvgre);
- /* Further IP/Ethernet are parsed as tunneled */
- pst->tunnel = 1;
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static inline void
-flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern + 1;
-
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
- pst->tunnel = 1;
-}
-
-static int
-otx2_flow_raw_item_prepare(const struct rte_flow_item_raw *raw_spec,
- const struct rte_flow_item_raw *raw_mask,
- struct otx2_flow_item_info *info,
- uint8_t *spec_buf, uint8_t *mask_buf)
-{
- uint32_t custom_hdr_size = 0;
-
- memset(spec_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- memset(mask_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- custom_hdr_size = raw_spec->offset + raw_spec->length;
-
- memcpy(spec_buf + raw_spec->offset, raw_spec->pattern,
- raw_spec->length);
-
- if (raw_mask->pattern) {
- memcpy(mask_buf + raw_spec->offset, raw_mask->pattern,
- raw_spec->length);
- } else {
- memset(mask_buf + raw_spec->offset, 0xFF, raw_spec->length);
- }
-
- info->len = custom_hdr_size;
- info->spec = spec_buf;
- info->mask = mask_buf;
-
- return 0;
-}
-
-/* Outer IPv4, Outer IPv6, MPLS, ARP */
-int
-otx2_flow_parse_lc(struct otx2_parse_state *pst)
-{
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- const struct rte_flow_item_raw *raw_spec;
- struct otx2_flow_item_info info;
- int lid, lt, len;
- int rc;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LC);
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LC;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_IPV4:
- lt = NPC_LT_LC_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- break;
- case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
- lt = NPC_LT_LC_ARP;
- info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6_EXT:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6_EXT;
- info.def_mask = &rte_flow_item_ipv6_ext_mask;
- info.len = sizeof(struct rte_flow_item_ipv6_ext);
- info.hw_hdr_len = 40;
- break;
- case RTE_FLOW_ITEM_TYPE_RAW:
- raw_spec = pst->pattern->spec;
- if (!raw_spec->relative)
- return 0;
-
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_NGIO;
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- break;
- default:
- /* No match at this layer */
- return 0;
- }
-
- /* Identify if IP tunnels MPLS or IPv4/v6 */
- flow_check_lc_ip_tunnel(pst);
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* VLAN, ETAG */
-int
-otx2_flow_parse_lb(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern;
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- const struct rte_flow_item *last_pattern;
- const struct rte_flow_item_raw *raw_spec;
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- struct otx2_flow_item_info info;
- int lid, lt, lflags, len;
- int nr_vlans = 0;
- int rc;
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = NPC_TPID_LENGTH;
-
- lid = NPC_LID_LB;
- lflags = 0;
- last_pattern = pattern;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- /* RTE vlan is either 802.1q or 802.1ad,
- * this maps to either CTAG/STAG. We need to decide
- * based on number of VLANS present. Matching is
- * supported on first tag only.
- */
- info.def_mask = &rte_flow_item_vlan_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
-
- pattern = pst->pattern;
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
-
- /* Basic validation of 2nd/3rd vlan item */
- if (nr_vlans > 1) {
- otx2_npc_dbg("Vlans = %d", nr_vlans);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- switch (nr_vlans) {
- case 1:
- lt = NPC_LT_LB_CTAG;
- break;
- case 2:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_CTAG;
- break;
- case 3:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 3 vlans not supported");
- return -rte_errno;
- }
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
- /* we can support ETAG and match a subsequent CTAG
- * without any matching support.
- */
- lt = NPC_LT_LB_ETAG;
- lflags = 0;
-
- last_pattern = pst->pattern;
- pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- info.def_mask = &rte_flow_item_vlan_mask;
- /* set supported mask to NULL for vlan tag */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
-
- lflags = NPC_F_ETAG_CTAG;
- last_pattern = pattern;
- }
-
- info.def_mask = &rte_flow_item_e_tag_mask;
- info.len = sizeof(struct rte_flow_item_e_tag);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_RAW) {
- raw_spec = pst->pattern->spec;
- if (raw_spec->relative)
- return 0;
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- lt = NPC_LT_LB_VLAN_EXDSA;
- } else if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- lt = NPC_LT_LB_EXDSA;
- } else {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "exdsa or vlan_exdsa not enabled on"
- " port");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- info.hw_hdr_len = 0;
- } else {
- return 0;
- }
-
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /* Point pattern to last item consumed */
- pst->pattern = last_pattern;
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-
-int
-otx2_flow_parse_la(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len += NPC_HIGIG2_LENGTH;
- }
- } else {
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_HIGIG2_LENGTH;
- }
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-int
-otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_higig2_hdr hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_HIGIG2)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_higig2_hdr_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_higig2_hdr);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-static int
-parse_rss_action(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action *act,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_rss_info *rss_info = &hw->rss_info;
- const struct rte_flow_action_rss *rss;
- uint32_t i;
-
- rss = (const struct rte_flow_action_rss *)act->conf;
-
- /* Not supported */
- if (attr->egress) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
- attr, "No support of RSS in egress");
- }
-
- if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "multi-queue mode is disabled");
-
- /* Parse RSS related parameters from configuration */
- if (!rss || !rss->queue_num)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "no valid queues");
-
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions"
- " are not supported");
-
- if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "RSS hash key too large");
-
- if (rss->queue_num > rss_info->rss_size)
- return rte_flow_error_set
- (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "too many queues for RSS context");
-
- for (i = 0; i < rss->queue_num; i++) {
- if (rss->queue[i] >= dev->data->nb_rx_queues)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "queue id > max number"
- " of queues");
- }
-
- return 0;
-}
-
-int
-otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- const struct rte_flow_action_mark *act_mark;
- const struct rte_flow_action_queue *act_q;
- const struct rte_flow_action_vf *vf_act;
- uint16_t pf_func, vf_id, port_id, pf_id;
- char if_name[RTE_ETH_NAME_MAX_LEN];
- bool vlan_insert_action = false;
- struct rte_eth_dev *eth_dev;
- const char *errmsg = NULL;
- int sel_act, req_act = 0;
- int errcode = 0;
- int mark = 0;
- int rq = 0;
-
- /* Initialize actions */
- flow->ctr_id = NPC_COUNTER_NONE;
- pf_func = otx2_pfvf_func(hw->pf, hw->vf);
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- otx2_npc_dbg("Action type = %d", actions->type);
-
- switch (actions->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- act_mark =
- (const struct rte_flow_action_mark *)actions->conf;
-
- /* We have only 16 bits. Use highest val for flag */
- if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
- errmsg = "mark value must be < 0xfffe";
- errcode = ENOTSUP;
- goto err_exit;
- }
- mark = act_mark->id + 1;
- req_act |= OTX2_FLOW_ACT_MARK;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_FLAG:
- mark = OTX2_FLOW_FLAG_VAL;
- req_act |= OTX2_FLOW_ACT_FLAG;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_COUNT:
- /* Indicates, need a counter */
- flow->ctr_id = 1;
- req_act |= OTX2_FLOW_ACT_COUNT;
- break;
-
- case RTE_FLOW_ACTION_TYPE_DROP:
- req_act |= OTX2_FLOW_ACT_DROP;
- break;
-
- case RTE_FLOW_ACTION_TYPE_PF:
- req_act |= OTX2_FLOW_ACT_PF;
- pf_func &= (0xfc00);
- break;
-
- case RTE_FLOW_ACTION_TYPE_VF:
- vf_act = (const struct rte_flow_action_vf *)
- actions->conf;
- req_act |= OTX2_FLOW_ACT_VF;
- if (vf_act->original == 0) {
- vf_id = vf_act->id & RVU_PFVF_FUNC_MASK;
- if (vf_id >= hw->maxvf) {
- errmsg = "invalid vf specified";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- }
- break;
-
- case RTE_FLOW_ACTION_TYPE_PORT_ID:
- case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
- if (actions->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
- const struct rte_flow_action_port_id *port_act;
-
- port_act = actions->conf;
- port_id = port_act->id;
- } else {
- const struct rte_flow_action_ethdev *ethdev_act;
-
- ethdev_act = actions->conf;
- port_id = ethdev_act->port_id;
- }
- if (rte_eth_dev_get_name_by_port(port_id, if_name)) {
- errmsg = "Name not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- eth_dev = rte_eth_dev_allocated(if_name);
- if (!eth_dev) {
- errmsg = "eth_dev not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- if (!otx2_ethdev_is_same_driver(eth_dev)) {
- errmsg = "Output port id unsupported type";
- errcode = ENOTSUP;
- goto err_exit;
- }
- if (!otx2_dev_is_vf(otx2_eth_pmd_priv(eth_dev))) {
- errmsg = "Output port should be VF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- vf_id = otx2_eth_pmd_priv(eth_dev)->vf;
- if (vf_id >= hw->maxvf) {
- errmsg = "Invalid vf for output port";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_id = otx2_eth_pmd_priv(eth_dev)->pf;
- if (pf_id != hw->pf) {
- errmsg = "Output port unsupported PF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- req_act |= OTX2_FLOW_ACT_VF;
- break;
-
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Applicable only to ingress flow */
- act_q = (const struct rte_flow_action_queue *)
- actions->conf;
- rq = act_q->index;
- if (rq >= dev->data->nb_rx_queues) {
- errmsg = "invalid queue index";
- errcode = EINVAL;
- goto err_exit;
- }
- req_act |= OTX2_FLOW_ACT_QUEUE;
- break;
-
- case RTE_FLOW_ACTION_TYPE_RSS:
- errcode = parse_rss_action(dev, attr, actions, error);
- if (errcode)
- return -rte_errno;
-
- req_act |= OTX2_FLOW_ACT_RSS;
- break;
-
- case RTE_FLOW_ACTION_TYPE_SECURITY:
- /* Assumes user has already configured security
- * session for this flow. Associated conf is
- * opaque. When RTE security is implemented for otx2,
- * we need to verify that for specified security
- * session:
- * action_type ==
- * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
- * session_protocol ==
- * RTE_SECURITY_PROTOCOL_IPSEC
- *
- * RSS is not supported with inline ipsec. Get the
- * rq from associated conf, or make
- * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
- * action.
- * Currently, rq = 0 is assumed.
- */
- req_act |= OTX2_FLOW_ACT_SEC;
- rq = 0;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- req_act |= OTX2_FLOW_ACT_VLAN_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_STRIP;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- req_act |= OTX2_FLOW_ACT_VLAN_PCP_INSERT;
- break;
- default:
- errmsg = "Unsupported action specified";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if (req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT))
- vlan_insert_action = true;
-
- if ((req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT)) ==
- OTX2_FLOW_ACT_VLAN_PCP_INSERT) {
- errmsg = " PCP insert action can't be supported alone";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Both STRIP and INSERT actions are not supported */
- if (vlan_insert_action && (req_act & OTX2_FLOW_ACT_VLAN_STRIP)) {
- errmsg = "Both VLAN insert and strip actions not supported"
- " together";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Check if actions specified are compatible */
- if (attr->egress) {
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP) {
- errmsg = "VLAN pop action is not supported on Egress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_TX_ACTIONOP_DROP;
- } else if ((req_act & OTX2_FLOW_ACT_COUNT) ||
- vlan_insert_action) {
- flow->npc_action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
- } else {
- errmsg = "Unsupported action for egress";
- errcode = EINVAL;
- goto err_exit;
- }
- goto set_pf_func;
- }
-
- /* We have already verified the attr, this is ingress.
- * - Exactly one terminating action is supported
- * - Exactly one of MARK or FLAG is supported
- * - If terminating action is DROP, only count is valid.
- */
- sel_act = req_act & OTX2_FLOW_ACT_TERM;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only one terminating action supported";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only COUNT action is supported "
- "with DROP ingress action";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
- == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- errmsg = "Only one of FLAG or MARK action is supported";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (vlan_insert_action) {
- errmsg = "VLAN push/Insert action is not supported on Ingress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP)
- npc->vtag_actions++;
-
- /* Only VLAN action is provided */
- if (req_act == OTX2_FLOW_ACT_VLAN_STRIP)
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- /* Set NIX_RX_ACTIONOP */
- else if (req_act & (OTX2_FLOW_ACT_PF | OTX2_FLOW_ACT_VF)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- if (req_act & OTX2_FLOW_ACT_QUEUE)
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_RSS) {
- /* When user added a rule for rss, first we will add the
- *rule in MCAM and then update the action, once if we have
- *FLOW_KEY_ALG index. So, till we update the action with
- *flow_key_alg index, set the action to drop.
- */
- if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- else
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_SEC) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_COUNT) {
- /* Keep OTX2_FLOW_ACT_COUNT always at the end
- * This is default action, when user specify only
- * COUNT ACTION
- */
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else {
- /* Should never reach here */
- errmsg = "Invalid action specified";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (mark)
- flow->npc_action |= (uint64_t)mark << 40;
-
- if (rte_atomic32_read(&npc->mark_actions) == 1) {
- hw->rx_offload_flags |=
- NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
-
- if (npc->vtag_actions == 1) {
- hw->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
-
-set_pf_func:
- /* Ideally AF must ensure that correct pf_func is set */
- if (attr->egress)
- flow->npc_action |= (uint64_t)pf_func << 48;
- else
- flow->npc_action |= (uint64_t)pf_func << 4;
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
- errmsg);
- return -rte_errno;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
deleted file mode 100644
index 35f7d0f4bc..0000000000
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ /dev/null
@@ -1,969 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-static int
-flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
-{
- struct npc_mcam_alloc_counter_req *req;
- struct npc_mcam_alloc_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
- req->count = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *ctr = rsp->cntr_list[0];
- return rc;
-}
-
-int
-otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count)
-{
- struct npc_mcam_oper_counter_req *req;
- struct npc_mcam_oper_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *count = rsp->stat;
- return rc;
-}
-
-int
-otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->all = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-static void
-flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
-{
- int idx;
-
- for (idx = 0; idx < len; idx++)
- ptr[idx] = data[len - 1 - idx];
-}
-
-static int
-flow_check_copysz(size_t size, size_t len)
-{
- if (len <= size)
- return len;
- return -1;
-}
-
-static inline int
-flow_mem_is_zero(const void *mem, int len)
-{
- const char *m = mem;
- int i;
-
- for (i = 0; i < len; i++) {
- if (m[i] != 0)
- return 0;
- }
- return 1;
-}
-
-static void
-flow_set_hw_mask(struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo,
- char *hw_mask)
-{
- int max_off, offset;
- int j;
-
- if (xinfo->enable == 0)
- return;
-
- if (xinfo->hdr_off < info->hw_hdr_len)
- return;
-
- max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len;
-
- if (max_off > info->len)
- max_off = info->len;
-
- offset = xinfo->hdr_off - info->hw_hdr_len;
- for (j = offset; j < max_off; j++)
- hw_mask[j] = 0xff;
-}
-
-void
-otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt)
-{
- struct npc_xtract_info *xinfo, *lfinfo;
- char *hw_mask = info->hw_mask;
- int lf_cfg;
- int i, j;
- int intf;
-
- intf = pst->flow->nix_intf;
- xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
- memset(hw_mask, 0, info->len);
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- flow_set_hw_mask(info, &xinfo[i], hw_mask);
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
-
- if (xinfo[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- flow_set_hw_mask(info, &lfinfo[0], hw_mask);
- }
- }
- }
-}
-
-static int
-flow_update_extraction_data(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo)
-{
- uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
- struct npc_xtract_info *x;
- int k, idx, hdr_off;
- int len = 0;
-
- x = xinfo;
- len = x->len;
- hdr_off = x->hdr_off;
-
- if (hdr_off < info->hw_hdr_len)
- return 0;
-
- if (x->enable == 0)
- return 0;
-
- otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
- "x->key_off = %d", x->hdr_off, len, info->len,
- x->key_off);
-
- hdr_off -= info->hw_hdr_len;
-
- if (hdr_off + len > info->len)
- len = info->len - hdr_off;
-
- /* Check for over-write of previous layer */
- if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
- len)) {
- /* Cannot support this data match */
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Extraction unsupported");
- return -rte_errno;
- }
-
- len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
- - x->key_off,
- len);
- if (len < 0) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Internal Error");
- return -rte_errno;
- }
-
- /* Need to reverse complete structure so that dest addr is at
- * MSB so as to program the MCAM using mcam_data & mcam_mask
- * arrays
- */
- flow_prep_mcam_ldata(int_info,
- (const uint8_t *)info->spec + hdr_off,
- x->len);
- flow_prep_mcam_ldata(int_info_mask,
- (const uint8_t *)info->mask + hdr_off,
- x->len);
-
- otx2_npc_dbg("Spec: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ",
- ((const uint8_t *)info->spec)[k]);
-
- otx2_npc_dbg("Int_info: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ", int_info[k]);
-
- memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
- memcpy(pst->mcam_data + x->key_off, int_info, len);
-
- otx2_npc_dbg("Parse state mcam data & mask");
- for (idx = 0; idx < len ; idx++)
- otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
- *(pst->mcam_data + idx + x->key_off), idx,
- *(pst->mcam_mask + idx + x->key_off));
- return 0;
-}
-
-int
-otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt,
- uint8_t flags)
-{
- struct npc_lid_lt_xtract_info *xinfo;
- struct otx2_flow_dump_data *dump;
- struct npc_xtract_info *lfinfo;
- int intf, lf_cfg;
- int i, j, rc = 0;
-
- otx2_npc_dbg("Parse state function info mask total %s",
- (const uint8_t *)info->mask);
-
- pst->layer_mask |= lid;
- pst->lt[lid] = lt;
- pst->flags[lid] = flags;
-
- intf = pst->flow->nix_intf;
- xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
- otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
- if (xinfo->is_terminating)
- pst->terminate = 1;
-
- if (info->spec == NULL) {
- otx2_npc_dbg("Info spec NULL");
- goto done;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- rc = flow_update_extraction_data(pst, info, &xinfo->xtract[i]);
- if (rc != 0)
- return rc;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- if (xinfo->xtract[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- rc = flow_update_extraction_data(pst, info,
- &lfinfo[0]);
- if (rc != 0)
- return rc;
-
- if (lfinfo[0].enable)
- pst->flags[lid] = j;
- }
- }
- }
-
-done:
- dump = &pst->flow->dump_data[pst->flow->num_patterns++];
- dump->lid = lid;
- dump->ltype = lt;
- /* Next pattern to parse by subsequent layers */
- pst->pattern++;
- return 0;
-}
-
-static inline int
-flow_range_is_valid(const char *spec, const char *last, const char *mask,
- int len)
-{
- /* Mask must be zero or equal to spec as we do not support
- * non-contiguous ranges.
- */
- while (len--) {
- if (last[len] &&
- (spec[len] & mask[len]) != (last[len] & mask[len]))
- return 0; /* False */
- }
- return 1;
-}
-
-
-static inline int
-flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
-{
- /*
- * If no hw_mask, assume nothing is supported.
- * mask is never NULL
- */
- if (hw_mask == NULL)
- return flow_mem_is_zero(mask, len);
-
- while (len--) {
- if ((mask[len] | hw_mask[len]) != hw_mask[len])
- return 0; /* False */
- }
- return 1;
-}
-
-int
-otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error)
-{
- /* Item must not be NULL */
- if (item == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Item is NULL");
- return -rte_errno;
- }
- /* If spec is NULL, both mask and last must be NULL, this
- * makes it to match ANY value (eq to mask = 0).
- * Setting either mask or last without spec is an error
- */
- if (item->spec == NULL) {
- if (item->last == NULL && item->mask == NULL) {
- info->spec = NULL;
- return 0;
- }
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "mask or last set without spec");
- return -rte_errno;
- }
-
- /* We have valid spec */
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->spec = item->spec;
-
- /* If mask is not set, use default mask, err if default mask is
- * also NULL.
- */
- if (item->mask == NULL) {
- otx2_npc_dbg("Item mask null, using default mask");
- if (info->def_mask == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "No mask or default mask given");
- return -rte_errno;
- }
- info->mask = info->def_mask;
- } else {
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->mask = item->mask;
- }
-
- /* mask specified must be subset of hw supported mask
- * mask | hw_mask == hw_mask
- */
- if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Unsupported field in the mask");
- return -rte_errno;
- }
-
- /* Now we have spec and mask. OTX2 does not support non-contiguous
- * range. We should have either:
- * - spec & mask == last & mask or,
- * - last == 0 or,
- * - last == NULL
- */
- if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
- if (!flow_range_is_valid(item->spec, item->last, info->mask,
- info->len)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Unsupported range for match");
- return -rte_errno;
- }
- }
-
- return 0;
-}
-
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
- uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
- int i, j = 0;
-
- for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
- if (nibble_mask & (1 << i)) {
- nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
- cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
- j += 1;
- }
- }
-
- data[0] = cdata[0];
- data[1] = cdata[1];
-}
-
-static int
-flow_first_set_bit(uint64_t slab)
-{
- int num = 0;
-
- if ((slab & 0xffffffff) == 0) {
- num += 32;
- slab >>= 32;
- }
- if ((slab & 0xffff) == 0) {
- num += 16;
- slab >>= 16;
- }
- if ((slab & 0xff) == 0) {
- num += 8;
- slab >>= 8;
- }
- if ((slab & 0xf) == 0) {
- num += 4;
- slab >>= 4;
- }
- if ((slab & 0x3) == 0) {
- num += 2;
- slab >>= 2;
- }
- if ((slab & 0x1) == 0)
- num += 1;
-
- return num;
-}
-
-static int
-flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- uint32_t old_ent, uint32_t new_ent)
-{
- struct npc_mcam_shift_entry_req *req;
- struct npc_mcam_shift_entry_rsp *rsp;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- int rc = 0;
-
- otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
- flow->priority);
-
- list = &flow_info->flow_list[flow->priority];
-
- /* Old entry is disabled & it's contents are moved to new_entry,
- * new entry is enabled finally.
- */
- req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
- req->curr_entry[0] = old_ent;
- req->new_entry[0] = new_ent;
- req->shift_count = 1;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Remove old node from list */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id == old_ent)
- TAILQ_REMOVE(list, flow_iter, next);
- }
-
- /* Insert node with new mcam id at right place */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > new_ent)
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- }
- return rc;
-}
-
-/* Exchange all required entries with a given priority level */
-static int
-flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
-{
- struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
- uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
- uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
- /* Bit position within the slab */
- uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
- /* Overall bit position of the start of slab */
- /* free & live entry index */
- int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
- struct otx2_mcam_ents_info *ent_info;
- /* free & live bitmap slab */
- uint64_t sl_fr = 0, sl_lv = 0, *sl;
-
- fr_bmp = flow_info->free_entries[prio_lvl];
- fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
- lv_bmp = flow_info->live_entries[prio_lvl];
- lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
- ent_info = &flow_info->flow_entry_info[prio_lvl];
- mcam_entries = flow_info->mcam_entries;
-
-
- /* New entries allocated are always contiguous, but older entries
- * already in free/live bitmap can be non-contiguous: so return
- * shifted entries should be in non-contiguous format.
- */
- while (idx <= rsp->count) {
- if (!sl_fr && !sl_lv) {
- /* Lower index elements to be exchanged */
- if (dir < 0) {
- rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
- otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- } else {
- rc_fr = rte_bitmap_scan(fr_bmp_rev,
- &sl_fr_bit_off,
- &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp_rev,
- &sl_lv_bit_off,
- &sl_lv);
-
- otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- }
- }
-
- if (rc_fr) {
- fr_bit_pos = flow_first_set_bit(sl_fr);
- e_fr = sl_fr_bit_off + fr_bit_pos;
- otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
- } else {
- e_fr = ~(0);
- }
-
- if (rc_lv) {
- lv_bit_pos = flow_first_set_bit(sl_lv);
- e_lv = sl_lv_bit_off + lv_bit_pos;
- otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
- } else {
- e_lv = ~(0);
- }
-
- /* First entry is from free_bmap */
- if (e_fr < e_lv) {
- bmp = fr_bmp;
- e = e_fr;
- sl = &sl_fr;
- bit_pos = fr_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
- otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
- } else {
- bmp = lv_bmp;
- e = e_lv;
- sl = &sl_lv;
- bit_pos = lv_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
-
- otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
- if (idx < rsp->count)
- rc =
- flow_shift_lv_ent(mbox, flow,
- flow_info, e_id,
- rsp->entry + idx);
- }
-
- rte_bitmap_clear(bmp, e);
- rte_bitmap_set(bmp, rsp->entry + idx);
- /* Update entry list, use non-contiguous
- * list now.
- */
- rsp->entry_list[idx] = e_id;
- *sl &= ~(1 << bit_pos);
-
- /* Update min & max entry identifiers in current
- * priority level.
- */
- if (dir < 0) {
- ent_info->max_id = rsp->entry + idx;
- ent_info->min_id = e_id;
- } else {
- ent_info->max_id = e_id;
- ent_info->min_id = rsp->entry;
- }
-
- idx++;
- }
- return rc;
-}
-
-/* Validate if newly allocated entries lie in the correct priority zone
- * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
- * If not properly aligned, shift entries to do so
- */
-static int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio)
-{
- int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
- uint32_t tot_ent = 0;
-
- otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
-
- if (dir < 0)
- prio_idx = flow_info->flow_max_priority - 1;
-
- /* Only live entries needs to be shifted, free entries can just be
- * moved by bits manipulation.
- */
-
- /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority
- * level entries(lower indexes).
- *
- * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority
- * level entries(higher indexes) with highest indexes.
- */
- do {
- tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
-
- if (dir < 0 && prio_idx != prio &&
- rsp->entry > info[prio_idx].max_id && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "max id %u", rsp->entry, prio_idx,
- info[prio_idx].max_id);
-
- needs_shift = 1;
- } else if ((dir > 0) && (prio_idx != prio) &&
- (rsp->entry < info[prio_idx].min_id) && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "min id %u", rsp->entry, prio_idx,
- info[prio_idx].min_id);
- needs_shift = 1;
- }
-
- otx2_npc_dbg("Needs_shift = %d", needs_shift);
- if (needs_shift) {
- needs_shift = 0;
- rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
- prio_idx);
- } else {
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
- } while ((prio_idx != prio) && (prio_idx += dir));
-
- return rc;
-}
-
-static int
-flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
- int prio_lvl)
-{
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int step = 1;
-
- while (step < flow_info->flow_max_priority) {
- if (((prio_lvl + step) < flow_info->flow_max_priority) &&
- info[prio_lvl + step].live_ent) {
- *prio = NPC_MCAM_HIGHER_PRIO;
- return info[prio_lvl + step].min_id;
- }
-
- if (((prio_lvl - step) >= 0) &&
- info[prio_lvl - step].live_ent) {
- otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
- info[prio_lvl - step].live_ent);
- *prio = NPC_MCAM_LOWER_PRIO;
- return info[prio_lvl - step].max_id;
- }
- step++;
- }
- *prio = NPC_MCAM_ANY_PRIO;
- return 0;
-}
-
-static int
-flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
-{
- struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
- struct npc_mcam_alloc_entry_rsp rsp_local;
- struct npc_mcam_alloc_entry_rsp *rsp_cmd;
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mcam_ents_info *info;
- uint16_t ref_ent, idx;
- int rc, prio;
-
- info = &flow_info->flow_entry_info[flow->priority];
- free_bmp = flow_info->free_entries[flow->priority];
- free_bmp_rev = flow_info->free_entries_rev[flow->priority];
- live_bmp = flow_info->live_entries[flow->priority];
- live_bmp_rev = flow_info->live_entries_rev[flow->priority];
-
- ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->contig = 1;
- req->count = flow_info->flow_prealloc_size;
- req->priority = prio;
- req->ref_entry = ref_ent;
-
- otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
- if (rc)
- return rc;
-
- rsp = &rsp_local;
- memcpy(rsp, rsp_cmd, sizeof(*rsp));
-
- otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
- rsp->count, prio);
-
- /* Non-first ent cache fill */
- if (prio != NPC_MCAM_ANY_PRIO) {
- flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
- prio);
- } else {
- /* Copy into response entry list */
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
-
- otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
- /* Update free entries, reverse free entries list,
- * min & max entry ids.
- */
- for (idx = 0; idx < rsp->count; idx++) {
- if (unlikely(rsp->entry_list[idx] < info->min_id))
- info->min_id = rsp->entry_list[idx];
-
- if (unlikely(rsp->entry_list[idx] > info->max_id))
- info->max_id = rsp->entry_list[idx];
-
- /* Skip entry to be returned, not to be part of free
- * list.
- */
- if (prio == NPC_MCAM_HIGHER_PRIO) {
- if (unlikely(idx == (rsp->count - 1))) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- } else {
- if (unlikely(!idx)) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- }
- info->free_ent++;
- rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
- rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
- rsp->entry_list[idx] - 1);
-
- otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
- rsp->entry_list[idx],
- flow_info->mcam_entries - rsp->entry_list[idx] - 1);
- }
-
- otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
- flow_info->mcam_entries - *free_ent - 1);
- info->live_ent++;
- rte_bitmap_set(live_bmp, *free_ent);
- rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
-
- return 0;
-}
-
-static int
-flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
- struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info)
-{
- struct rte_bitmap *free, *free_rev, *live, *live_rev;
- uint32_t pos = 0, free_ent = 0, mcam_entries;
- struct otx2_mcam_ents_info *info;
- uint64_t slab = 0;
- int rc;
-
- otx2_npc_dbg("Flow priority %u", flow->priority);
-
- info = &flow_info->flow_entry_info[flow->priority];
-
- free_rev = flow_info->free_entries_rev[flow->priority];
- free = flow_info->free_entries[flow->priority];
- live_rev = flow_info->live_entries_rev[flow->priority];
- live = flow_info->live_entries[flow->priority];
- mcam_entries = flow_info->mcam_entries;
-
- if (info->free_ent) {
- rc = rte_bitmap_scan(free, &pos, &slab);
- if (rc) {
- /* Get free_ent from free entry bitmap */
- free_ent = pos + __builtin_ctzll(slab);
- otx2_npc_dbg("Allocated from cache entry %u", free_ent);
- /* Remove from free bitmaps and add to live ones */
- rte_bitmap_clear(free, free_ent);
- rte_bitmap_set(live, free_ent);
- rte_bitmap_clear(free_rev,
- mcam_entries - free_ent - 1);
- rte_bitmap_set(live_rev,
- mcam_entries - free_ent - 1);
-
- info->free_ent--;
- info->live_ent++;
- return free_ent;
- }
-
- otx2_npc_dbg("No free entry:its a mess");
- return -1;
- }
-
- rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
- if (rc)
- return rc;
-
- return free_ent;
-}
-
-int
-otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info)
-{
- int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
- struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
- struct npc_mcam_write_entry_req *req;
- struct mcam_entry *base_entry;
- struct mbox_msghdr *rsp;
- uint16_t ctr = ~(0);
- int rc, idx;
- int entry;
-
- if (use_ctr) {
- rc = flow_mcam_alloc_counter(mbox, &ctr);
- if (rc)
- return rc;
- }
-
- entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
- if (entry < 0) {
- otx2_err("Prealloc failed");
- otx2_flow_mcam_free_counter(mbox, ctr);
- return NPC_MCAM_ALLOC_FAILED;
- }
-
- if (pst->is_vf) {
- (void)otx2_mbox_alloc_msg_npc_read_base_steer_rule(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&base_rule_rsp);
- if (rc) {
- otx2_err("Failed to fetch VF's base MCAM entry");
- return rc;
- }
- base_entry = &base_rule_rsp->entry_data;
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- flow->mcam_data[idx] |= base_entry->kw[idx];
- flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
- }
- }
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- req->set_cntr = use_ctr;
- req->cntr = ctr;
- req->entry = entry;
- otx2_npc_dbg("Alloc & write entry %u", entry);
-
- req->intf =
- (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
- req->enable_entry = 1;
- req->entry_data.action = flow->npc_action;
- req->entry_data.vtag_action = flow->vtag_action;
-
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- req->entry_data.kw[idx] = flow->mcam_data[idx];
- req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
- }
-
- if (flow->nix_intf == OTX2_INTF_RX) {
- req->entry_data.kw[0] |= flow_info->channel;
- req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
- } else {
- uint16_t pf_func = (flow->npc_action >> 48) & 0xffff;
-
- pf_func = htons(pf_func);
- req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
- req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc != 0)
- return rc;
-
- flow->mcam_id = entry;
- if (use_ctr)
- flow->ctr_id = ctr;
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
deleted file mode 100644
index 8f5d0eed92..0000000000
--- a/drivers/net/octeontx2/otx2_link.c
+++ /dev/null
@@ -1,287 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <ethdev_pci.h>
-
-#include "otx2_ethdev.h"
-
-void
-otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
-{
- if (set)
- dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
- else
- dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
-
- rte_wmb();
-}
-
-static inline int
-nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
-{
- uint16_t wait = 1000;
-
- do {
- rte_rmb();
- if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
- break;
- wait--;
- rte_delay_ms(1);
- } while (wait);
-
- return wait ? 0 : -1;
-}
-
-static void
-nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
-{
- if (link && link->link_status)
- otx2_info("Port %d: Link Up - speed %u Mbps - %s",
- (int)(eth_dev->data->port_id),
- (uint32_t)link->link_speed,
- link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
- "full-duplex" : "half-duplex");
- else
- otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
-}
-
-void
-otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return;
-
-	rte_eth_linkstatus_get(eth_dev, &eth_link);
-
- link->link_up = eth_link.link_status;
- link->speed = eth_link.link_speed;
- link->an = eth_link.link_autoneg;
- link->full_duplex = eth_link.link_duplex;
-}
-
-void
-otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev || !eth_dev->data->dev_conf.intr_conf.lsc)
- return;
-
- if (nix_wait_for_link_cfg(otx2_dev)) {
- otx2_err("Timeout waiting for link_cfg to complete");
- return;
- }
-
- eth_link.link_status = link->link_up;
- eth_link.link_speed = link->speed;
- eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
- eth_link.link_duplex = link->full_duplex;
-
- otx2_dev->speed = link->speed;
- otx2_dev->duplex = link->full_duplex;
-
- /* Print link info */
-	nix_link_status_print(eth_dev, &eth_link);
-
- /* Update link info */
-	rte_eth_linkstatus_set(eth_dev, &eth_link);
-
- /* Set the flag and execute application callbacks */
- rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
-}
-
-static int
-lbk_link_update(struct rte_eth_link *link)
-{
- link->link_status = RTE_ETH_LINK_UP;
- link->link_speed = RTE_ETH_SPEED_NUM_100G;
- link->link_autoneg = RTE_ETH_LINK_FIXED;
- link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- return 0;
-}
-
-static int
-cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_link_info_msg *rsp;
- int rc;
- otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- link->link_status = rsp->link_info.link_up;
- link->link_speed = rsp->link_info.speed;
- link->link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- if (rsp->link_info.full_duplex)
- link->link_duplex = rsp->link_info.full_duplex;
- return 0;
-}
-
-int
-otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_link link;
- int rc;
-
- RTE_SET_USED(wait_to_complete);
- memset(&link, 0, sizeof(struct rte_eth_link));
-
- if (!eth_dev->data->dev_started || otx2_dev_is_sdp(dev))
- return 0;
-
- if (otx2_dev_is_lbk(dev))
- rc = lbk_link_update(&link);
- else
- rc = cgx_link_update(dev, &link);
-
- if (rc)
- return rc;
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-static int
-nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_state_msg *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
- req->enable = enable;
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- rc = nix_dev_set_link_state(eth_dev, 1);
- if (rc)
- goto done;
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_start(eth_dev, i);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- return nix_dev_set_link_state(eth_dev, 0);
-}
-
-static int
-cgx_change_mode(struct otx2_eth_dev *dev, struct cgx_set_link_mode_args *cfg)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_mode_req *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_mode(mbox);
- req->args.speed = cfg->speed;
- req->args.duplex = cfg->duplex;
- req->args.an = cfg->an;
-
- return otx2_mbox_process(mbox);
-}
-
-#define SPEED_NONE 0
-static inline uint32_t
-nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
-{
- uint32_t link_speed = SPEED_NONE;
-
- /* 50G and 100G to be supported for board version C0 and above */
- if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & RTE_ETH_LINK_SPEED_100G)
- link_speed = 100000;
- if (link_speeds & RTE_ETH_LINK_SPEED_50G)
- link_speed = 50000;
- }
- if (link_speeds & RTE_ETH_LINK_SPEED_40G)
- link_speed = 40000;
- if (link_speeds & RTE_ETH_LINK_SPEED_25G)
- link_speed = 25000;
- if (link_speeds & RTE_ETH_LINK_SPEED_20G)
- link_speed = 20000;
- if (link_speeds & RTE_ETH_LINK_SPEED_10G)
- link_speed = 10000;
- if (link_speeds & RTE_ETH_LINK_SPEED_5G)
- link_speed = 5000;
- if (link_speeds & RTE_ETH_LINK_SPEED_1G)
- link_speed = 1000;
-
- return link_speed;
-}
-
-static inline uint8_t
-nix_parse_eth_link_duplex(uint32_t link_speeds)
-{
- if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
- return RTE_ETH_LINK_HALF_DUPLEX;
- else
- return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
-int
-otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-	struct rte_eth_conf *conf = &eth_dev->data->dev_conf;
- struct cgx_set_link_mode_args cfg;
-
- /* If VF/SDP/LBK, link attributes cannot be changed */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return 0;
-
- memset(&cfg, 0, sizeof(struct cgx_set_link_mode_args));
- cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
- if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
- cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
-
- return cgx_change_mode(dev, &cfg);
- }
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
deleted file mode 100644
index 5fa9ae1396..0000000000
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ /dev/null
@@ -1,352 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <rte_memzone.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
- sizeof(uint32_t))
-
-#define SA_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ +\
- SA_TBL_SZ)
-
-const uint32_t *
-otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-
- static const uint32_t ptypes[] = {
- RTE_PTYPE_L2_ETHER_QINQ, /* LB */
- RTE_PTYPE_L2_ETHER_VLAN, /* LB */
- RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
- RTE_PTYPE_L2_ETHER_ARP, /* LC */
- RTE_PTYPE_L2_ETHER_NSH, /* LC */
- RTE_PTYPE_L2_ETHER_FCOE, /* LC */
- RTE_PTYPE_L2_ETHER_MPLS, /* LC */
- RTE_PTYPE_L3_IPV4, /* LC */
- RTE_PTYPE_L3_IPV4_EXT, /* LC */
- RTE_PTYPE_L3_IPV6, /* LC */
- RTE_PTYPE_L3_IPV6_EXT, /* LC */
- RTE_PTYPE_L4_TCP, /* LD */
- RTE_PTYPE_L4_UDP, /* LD */
- RTE_PTYPE_L4_SCTP, /* LD */
- RTE_PTYPE_L4_ICMP, /* LD */
- RTE_PTYPE_L4_IGMP, /* LD */
- RTE_PTYPE_TUNNEL_GRE, /* LD */
- RTE_PTYPE_TUNNEL_ESP, /* LD */
- RTE_PTYPE_TUNNEL_NVGRE, /* LD */
- RTE_PTYPE_TUNNEL_VXLAN, /* LE */
- RTE_PTYPE_TUNNEL_GENEVE, /* LE */
- RTE_PTYPE_TUNNEL_GTPC, /* LE */
- RTE_PTYPE_TUNNEL_GTPU, /* LE */
- RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
- RTE_PTYPE_INNER_L2_ETHER,/* LF */
- RTE_PTYPE_INNER_L3_IPV4, /* LG */
- RTE_PTYPE_INNER_L3_IPV6, /* LG */
- RTE_PTYPE_INNER_L4_TCP, /* LH */
- RTE_PTYPE_INNER_L4_UDP, /* LH */
- RTE_PTYPE_INNER_L4_SCTP, /* LH */
- RTE_PTYPE_INNER_L4_ICMP, /* LH */
- RTE_PTYPE_UNKNOWN,
- };
-
- return ptypes;
-}
-
-int
-otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (ptype_mask) {
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 0;
- } else {
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 1;
- }
-
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-}
-
-/*
- * +------------------ +------------------ +
- * | | IL4 | IL3| IL2 | TU | L4 | L3 | L2 |
- * +-------------------+-------------------+
- *
- * +-------------------+------------------ +
- * | | LH | LG | LF | LE | LD | LC | LB |
- * +-------------------+-------------------+
- *
- * ptype [LE - LD - LC - LB] = TU - L4 - L3 - T2
- * ptype_tunnel[LH - LG - LF] = IL4 - IL3 - IL2 - TU
- *
- */
-static void
-nix_create_non_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lb, lc, ld, le;
- uint16_t val;
- uint32_t idx;
-
- for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
- lb = idx & 0xF;
- lc = (idx & 0xF0) >> 4;
- ld = (idx & 0xF00) >> 8;
- le = (idx & 0xF000) >> 12;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lb) {
- case NPC_LT_LB_STAG_QINQ:
- val |= RTE_PTYPE_L2_ETHER_QINQ;
- break;
- case NPC_LT_LB_CTAG:
- val |= RTE_PTYPE_L2_ETHER_VLAN;
- break;
- }
-
- switch (lc) {
- case NPC_LT_LC_ARP:
- val |= RTE_PTYPE_L2_ETHER_ARP;
- break;
- case NPC_LT_LC_NSH:
- val |= RTE_PTYPE_L2_ETHER_NSH;
- break;
- case NPC_LT_LC_FCOE:
- val |= RTE_PTYPE_L2_ETHER_FCOE;
- break;
- case NPC_LT_LC_MPLS:
- val |= RTE_PTYPE_L2_ETHER_MPLS;
- break;
- case NPC_LT_LC_IP:
- val |= RTE_PTYPE_L3_IPV4;
- break;
- case NPC_LT_LC_IP_OPT:
- val |= RTE_PTYPE_L3_IPV4_EXT;
- break;
- case NPC_LT_LC_IP6:
- val |= RTE_PTYPE_L3_IPV6;
- break;
- case NPC_LT_LC_IP6_EXT:
- val |= RTE_PTYPE_L3_IPV6_EXT;
- break;
- case NPC_LT_LC_PTP:
- val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
- break;
- }
-
- switch (ld) {
- case NPC_LT_LD_TCP:
- val |= RTE_PTYPE_L4_TCP;
- break;
- case NPC_LT_LD_UDP:
- val |= RTE_PTYPE_L4_UDP;
- break;
- case NPC_LT_LD_SCTP:
- val |= RTE_PTYPE_L4_SCTP;
- break;
- case NPC_LT_LD_ICMP:
- case NPC_LT_LD_ICMP6:
- val |= RTE_PTYPE_L4_ICMP;
- break;
- case NPC_LT_LD_IGMP:
- val |= RTE_PTYPE_L4_IGMP;
- break;
- case NPC_LT_LD_GRE:
- val |= RTE_PTYPE_TUNNEL_GRE;
- break;
- case NPC_LT_LD_NVGRE:
- val |= RTE_PTYPE_TUNNEL_NVGRE;
- break;
- }
-
- switch (le) {
- case NPC_LT_LE_VXLAN:
- val |= RTE_PTYPE_TUNNEL_VXLAN;
- break;
- case NPC_LT_LE_ESP:
- val |= RTE_PTYPE_TUNNEL_ESP;
- break;
- case NPC_LT_LE_VXLANGPE:
- val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
- break;
- case NPC_LT_LE_GENEVE:
- val |= RTE_PTYPE_TUNNEL_GENEVE;
- break;
- case NPC_LT_LE_GTPC:
- val |= RTE_PTYPE_TUNNEL_GTPC;
- break;
- case NPC_LT_LE_GTPU:
- val |= RTE_PTYPE_TUNNEL_GTPU;
- break;
- case NPC_LT_LE_TU_MPLS_IN_GRE:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
- break;
- case NPC_LT_LE_TU_MPLS_IN_UDP:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
- break;
- }
- ptype[idx] = val;
- }
-}
-
-#define TU_SHIFT(x) ((x) >> PTYPE_NON_TUNNEL_WIDTH)
-static void
-nix_create_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lf, lg, lh;
- uint16_t val;
- uint32_t idx;
-
- /* Skip non tunnel ptype array memory */
- ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
-
- for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
- lf = idx & 0xF;
- lg = (idx & 0xF0) >> 4;
- lh = (idx & 0xF00) >> 8;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lf) {
- case NPC_LT_LF_TU_ETHER:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
- break;
- }
- switch (lg) {
- case NPC_LT_LG_TU_IP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
- break;
- case NPC_LT_LG_TU_IP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
- break;
- }
- switch (lh) {
- case NPC_LT_LH_TU_TCP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
- break;
- case NPC_LT_LH_TU_UDP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
- break;
- case NPC_LT_LH_TU_SCTP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
- break;
- case NPC_LT_LH_TU_ICMP:
- case NPC_LT_LH_TU_ICMP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
- break;
- }
-
- ptype[idx] = val;
- }
-}
-
-static void
-nix_create_rx_ol_flags_array(void *mem)
-{
- uint16_t idx, errcode, errlev;
- uint32_t val, *ol_flags;
-
- /* Skip ptype array memory */
- ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
-
- for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
- errlev = idx & 0xf;
- errcode = (idx & 0xff0) >> 4;
-
- val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
-
- switch (errlev) {
- case NPC_ERRLEV_RE:
- /* Mark all errors as BAD checksum errors
- * including Outer L2 length mismatch error
- */
- if (errcode) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LC:
- if (errcode == NPC_EC_OIP4_CSUM ||
- errcode == NPC_EC_IP_FRAG_OFFSET_1) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LG:
- if (errcode == NPC_EC_IIP4_CSUM)
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- else
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- break;
- case NPC_ERRLEV_NIX:
- if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
- errcode == NIX_RX_PERRCODE_OL4_LEN ||
- errcode == NIX_RX_PERRCODE_OL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
- errcode == NIX_RX_PERRCODE_IL4_LEN ||
- errcode == NIX_RX_PERRCODE_IL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
- errcode == NIX_RX_PERRCODE_OL3_LEN) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- }
- ol_flags[idx] = val;
- }
-}
-
-void *
-otx2_nix_fastpath_lookup_mem_get(void)
-{
- const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- const struct rte_memzone *mz;
- void *mem;
-
- /* SA_TBL starts after PTYPE_ARRAY & ERR_ARRAY */
- RTE_BUILD_BUG_ON(OTX2_NIX_SA_TBL_START != (PTYPE_ARRAY_SZ +
- ERR_ARRAY_SZ));
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- mem = mz->addr;
- /* Form the ptype array lookup memory */
- nix_create_non_tunnel_ptype_array(mem);
- nix_create_tunnel_ptype_array(mem);
- /* Form the rx ol_flags based on errcode */
- nix_create_rx_ol_flags_array(mem);
- return mem;
- }
- return NULL;
-}
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
deleted file mode 100644
index 49a700ca1d..0000000000
--- a/drivers/net/octeontx2/otx2_mac.c
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-
-int
-otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_mac_addr_set_or_get *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set mac address in CGX, rc=%d", rc);
-
- return 0;
-}
-
-int
-otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
-{
- struct cgx_max_dmac_entries_get_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->max_dmac_filters;
-}
-
-int
-otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
- uint32_t index __rte_unused, uint32_t pool __rte_unused)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_add_req *req;
- struct cgx_mac_addr_add_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to add mac address, rc=%d", rc);
- goto done;
- }
-
- /* Enable promiscuous mode at NIX level */
- otx2_nix_promisc_config(eth_dev, 1);
- dev->dmac_filter_enable = true;
- eth_dev->data->promiscuous = 0;
-
-done:
- return rc;
-}
-
-void
-otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_del_req *req;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
- req->index = index;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to delete mac address, rc=%d", rc);
-}
-
-int
-otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_set_mac_addr *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to set mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- /* Install the same entry into CGX DMAC filter table too. */
- otx2_cgx_mac_addr_set(eth_dev, addr);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_get_mac_addr_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
-
-done:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
deleted file mode 100644
index b9c63ad3bc..0000000000
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-static int
-nix_mc_addr_list_free(struct otx2_eth_dev *dev, uint32_t entry_count)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (entry_count == 0)
- goto exit;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry->mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- if (rc < 0)
- goto exit;
-
- TAILQ_REMOVE(&dev->mc_fltr_tbl, entry, next);
- rte_free(entry);
- entry_count--;
-
- if (entry_count == 0)
- break;
- }
-
- if (entry == NULL)
- dev->mc_tbl_set = false;
-
-exit:
- return rc;
-}
-
-static int
-nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_xtract_info *x_info;
- uint64_t mcam_data, mcam_mask;
- struct mcast_entry *entry;
- otx2_dxcfg_t *ld_cfg;
- uint8_t *mac_addr;
- uint64_t action;
- int idx, rc = 0;
-
- ld_cfg = &npc->prx_dxcfg;
- /* Get ETH layer profile info for populating mcam entries */
- x_info = &(*ld_cfg)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- /* The mbox memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- req->intf = NPC_MCAM_RX;
- req->enable_entry = 1;
-
- /* Channel base extracted to KW0[11:0] */
- req->entry_data.kw[0] = dev->rx_chan_base;
- req->entry_data.kw_mask[0] = RTE_LEN2MASK(12, uint64_t);
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)req->entry_data.kw;
- key_mask = (volatile uint8_t *)req->entry_data.kw_mask;
-
- mcam_data = 0ull;
- mcam_mask = RTE_LEN2MASK(48, uint64_t);
- mac_addr = &entry->mcast_mac.addr_bytes[0];
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- otx2_mbox_memcpy(key_data + x_info->key_off,
- &mcam_data, x_info->len);
- otx2_mbox_memcpy(key_mask + x_info->key_off,
- &mcam_mask, x_info->len);
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= ((uint64_t)otx2_pfvf_func(dev->pf, dev->vf)) << 4;
- req->entry_data.action = action;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t entry_count = 0, idx = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry_count++;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = entry_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || rsp->count < entry_count) {
- otx2_err("Failed to allocate required mcam entries");
- goto exit;
- }
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry->mcam_index = rsp->entry_list[idx];
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-static int
-nix_setup_mc_addr_list(struct otx2_eth_dev *dev,
- struct rte_ether_addr *mc_addr_set)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- uint32_t idx = 0;
- int rc = 0;
-
- /* Populate PMD's mcast list with given mcast mac addresses and
- * disable all mcam entries pertaining to the mcast list.
- */
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- rte_memcpy(&entry->mcast_mac, &mc_addr_set[idx++],
- RTE_ETHER_ADDR_LEN);
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t idx, priv_count = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (otx2_dev_is_vf(dev))
- return -ENOTSUP;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- priv_count++;
-
- if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- /* Free existing list if new list is null */
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- for (idx = 0; idx < nb_mc_addr; idx++) {
- if (!rte_is_multicast_ether_addr(&mc_addr_set[idx]))
- return -EINVAL;
- }
-
- /* New list is bigger than the existing list,
- * allocate mcam entries for the extra entries.
- */
- if (nb_mc_addr > priv_count) {
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = nb_mc_addr - priv_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || (rsp->count + priv_count < nb_mc_addr)) {
- otx2_err("Failed to allocate required entries");
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- /* Append new mcam entries to the existing mc list */
- for (idx = 0; idx < rsp->count; idx++) {
- entry = rte_zmalloc("otx2_nix_mc_entry",
- sizeof(struct mcast_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- nb_mc_addr = priv_count;
- rc = -ENOMEM;
- goto exit;
- }
- entry->mcam_index = rsp->entry_list[idx];
- TAILQ_INSERT_HEAD(&dev->mc_fltr_tbl, entry, next);
- }
- } else {
- /* Free the extra mcam entries if the new list is smaller
- * than exiting list.
- */
- nix_mc_addr_list_free(dev, priv_count - nb_mc_addr);
- }
-
-
- /* Now mc_fltr_tbl has the required number of mcam entries,
- * Traverse through it and add new multicast filter table entries.
- */
- rc = nix_setup_mc_addr_list(dev, mc_addr_set);
- if (rc < 0)
- goto exit;
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
- if (rc < 0)
- goto exit;
-
- dev->mc_tbl_set = true;
-
- return 0;
-
-exit:
- nix_mc_addr_list_free(dev, nb_mc_addr);
- return rc;
-}
-
-void
-otx2_nix_mc_filter_init(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_INIT(&dev->mc_fltr_tbl);
-}
-
-void
-otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev)
-{
- struct mcast_entry *entry;
- uint32_t count = 0;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- count++;
-
- nix_mc_addr_list_free(dev, count);
-}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
deleted file mode 100644
index abb2130587..0000000000
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ /dev/null
@@ -1,450 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <ethdev_driver.h>
-
-#include "otx2_ethdev.h"
-
-#define PTP_FREQ_ADJUST (1 << 9)
-
-/* Function to enable ptp config for VFs */
-void
-otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (otx2_nix_recalc_mtu(eth_dev))
- otx2_err("Failed to set MTU size for ptp");
-
- dev->scalar_ena = true;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
-}
-
-static uint16_t
-nix_eth_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- struct otx2_eth_rxq *rxq = queue;
- struct rte_eth_dev *eth_dev;
-
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- eth_dev = rxq->eth_dev;
- otx2_nix_ptp_enable_vf(eth_dev);
-
- return 0;
-}
-
-static int
-nix_read_raw_clock(struct otx2_eth_dev *dev, uint64_t *clock, uint64_t *tsc,
- uint8_t is_pmu)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- req->is_pmu = is_pmu;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto fail;
-
- if (clock)
- *clock = rsp->clk;
- if (tsc)
- *tsc = rsp->tsc;
-
-fail:
- return rc;
-}
-
-/* This function calculates two parameters "clk_freq_mult" and
- * "clk_delta" which is useful in deriving PTP HI clock from
- * timestamp counter (tsc) value.
- */
-int
-otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev)
-{
- uint64_t ticks_base = 0, ticks = 0, tsc = 0, t_freq;
- int rc, val;
-
- /* Calculating the frequency at which PTP HI clock is running */
- rc = nix_read_raw_clock(dev, &ticks_base, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- rte_delay_ms(100);
-
- rc = nix_read_raw_clock(dev, &ticks, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- t_freq = (ticks - ticks_base) * 10;
-
- /* Calculating the freq multiplier viz the ratio between the
- * frequency at which PTP HI clock works and tsc clock runs
- */
- dev->clk_freq_mult =
- (double)pow(10, floor(log10(t_freq))) / rte_get_timer_hz();
-
- val = false;
-#ifdef RTE_ARM_EAL_RDTSC_USE_PMU
- val = true;
-#endif
- rc = nix_read_raw_clock(dev, &ticks, &tsc, val);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- /* Calculating delta between PTP HI clock and tsc */
- dev->clk_delta = ((uint64_t)(ticks / dev->clk_freq_mult) - tsc);
-
-fail:
- return rc;
-}
-
-static void
-nix_start_timecounters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
-
- dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
-}
-
-static int
-nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t rc = -EINVAL;
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return rc;
-
- if (en) {
- /* Enable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
- return rc;
- }
- /* Enable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
- } else {
- /* Disable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
- return rc;
- }
- /* Disable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
- }
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_dev *eth_dev;
- int i;
-
- if (!dev)
- return -EINVAL;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return -EINVAL;
-
- otx2_dev->ptp_en = ptp_en;
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
- rxq->mbuf_initializer =
- otx2_nix_rxq_mbuf_setup(otx2_dev,
- eth_dev->data->port_id);
- }
- if (otx2_dev_is_vf(otx2_dev) && !(otx2_dev_is_sdp(otx2_dev)) &&
- !(otx2_dev_is_lbk(otx2_dev))) {
- /* In case of VF, setting of MTU cant be done directly in this
- * function as this is running as part of MBOX request(PF->VF)
- * and MTU setting also requires MBOX message to be
- * sent(VF->PF)
- */
- eth_dev->rx_pkt_burst = nix_eth_ptp_vf_burst;
- rte_mb();
- }
-
- return 0;
-}
-
-int
-otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- /* If we are VF/SDP/LBK, ptp cannot not be enabled */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev)) {
- otx2_info("PTP cannot be enabled in case of VF/SDP/LBK");
- return -EINVAL;
- }
-
- if (otx2_ethdev_is_ptp_en(dev)) {
- otx2_info("PTP mode is already enabled");
- return -EINVAL;
- }
-
- if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
- otx2_err("Ptype offload is disabled, it should be enabled");
- return -EINVAL;
- }
-
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- return -EINVAL;
- }
-
- /* Allocating a iova address for tx tstamp */
- const struct rte_memzone *ts;
- ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
- 0, OTX2_ALIGN, OTX2_ALIGN,
- dev->node);
- if (ts == NULL) {
- otx2_err("Failed to allocate mem for tx tstamp addr");
- return -ENOMEM;
- }
-
- dev->tstamp.tx_tstamp_iova = ts->iova;
- dev->tstamp.tx_tstamp = ts->addr;
-
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
-
- /* System time should be already on by default */
- nix_start_timecounters(eth_dev);
-
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 1);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- if (!otx2_ethdev_is_ptp_en(dev)) {
- otx2_nix_dbg("PTP mode is disabled");
- return -EINVAL;
- }
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return -EINVAL;
-
- dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 0);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t __rte_unused flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (!tstamp->rx_ready)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
- tstamp->rx_ready = 0;
-
- otx2_nix_dbg("rx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- (uint64_t)tstamp->rx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (*tstamp->tx_tstamp == 0)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("tx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- *tstamp->tx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- *tstamp->tx_tstamp = 0;
- rte_wmb();
-
- return 0;
-}
-
-int
-otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- /* Adjust the frequent to make tics increments in 10^9 tics per sec */
- if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_ADJFINE;
- req->scaled_ppm = delta;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- /* Since the frequency of PTP comp register is tuned, delta and
- * freq mult calculation for deriving PTP_HI from timestamp
- * counter should be done again.
- */
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc)
- otx2_err("Failed to calculate delta and freq mult");
- }
- dev->systime_tc.nsec += delta;
- dev->rx_tstamp_tc.nsec += delta;
- dev->tx_tstamp_tc.nsec += delta;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t ns;
-
- ns = rte_timespec_to_ns(ts);
- /* Set the time counters to a new value. */
- dev->systime_tc.nsec = ns;
- dev->rx_tstamp_tc.nsec = ns;
- dev->tx_tstamp_tc.nsec = ns;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- uint64_t ns;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
- *ts = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("PTP time read: %"PRIu64" .%09"PRIu64"",
- (uint64_t)ts->tv_sec, (uint64_t)ts->tv_nsec);
-
- return 0;
-}
-
-
-int
-otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* This API returns the raw PTP HI clock value. Since LFs doesn't
- * have direct access to PTP registers and it requires mbox msg
- * to AF for this value. In fastpath reading this value for every
- * packet (which involes mbox call) becomes very expensive, hence
- * we should be able to derive PTP HI clock value from tsc by
- * using freq_mult and clk_delta calculated during configure stage.
- */
- *clock = (rte_get_tsc_cycles() + dev->clk_delta) * dev->clk_freq_mult;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
deleted file mode 100644
index 68cef1caa3..0000000000
--- a/drivers/net/octeontx2/otx2_rss.c
+++ /dev/null
@@ -1,427 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
- uint8_t group, uint16_t *ind_tbl)
-{
- struct otx2_rss_info *rss = &dev->rss_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *req;
- int rc, idx;
-
- for (idx = 0; idx < rss->rss_size; idx++) {
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_INIT;
-
- if (!dev->lock_rx_ctx)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_LOCK;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
- int idx = 0;
-
- rc = -EINVAL;
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask >> j) & 0x01)
- rss->ind_tbl[idx] = reta_conf[i].reta[j];
- idx++;
- }
- }
-
- return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
-
- rc = -EINVAL;
-
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
- if ((reta_conf[i].mask >> j) & 0x01)
- reta_conf[i].reta[j] = rss->ind_tbl[j];
- }
-
- return 0;
-
-fail:
- return rc;
-}
-
-void
-otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
- uint32_t key_len)
-{
- const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
- };
- struct otx2_rss_info *rss = &dev->rss_info;
- uint64_t *keyptr;
- uint64_t val;
- uint32_t idx;
-
- if (key == NULL || key == 0) {
- keyptr = (uint64_t *)(uintptr_t)default_key;
- key_len = NIX_HASH_KEY_SIZE;
- memset(rss->key, 0, key_len);
- } else {
- memcpy(rss->key, key, key_len);
- keyptr = (uint64_t *)rss->key;
- }
-
- for (idx = 0; idx < (key_len >> 3); idx++) {
- val = rte_cpu_to_be_64(*keyptr);
- otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
- keyptr++;
- }
-}
-
-static void
-rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
-{
- uint64_t *keyptr = (uint64_t *)key;
- uint64_t val;
- int idx;
-
- for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
- val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
- *keyptr = rte_be_to_cpu_64(val);
- keyptr++;
- }
-}
-
-#define RSS_IPV4_ENABLE ( \
- RTE_ETH_RSS_IPV4 | \
- RTE_ETH_RSS_FRAG_IPV4 | \
- RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-
-#define RSS_IPV6_ENABLE ( \
- RTE_ETH_RSS_IPV6 | \
- RTE_ETH_RSS_FRAG_IPV6 | \
- RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define RSS_IPV6_EX_ENABLE ( \
- RTE_ETH_RSS_IPV6_EX | \
- RTE_ETH_RSS_IPV6_TCP_EX | \
- RTE_ETH_RSS_IPV6_UDP_EX)
-
-#define RSS_MAX_LEVELS 3
-
-#define RSS_IPV4_INDEX 0
-#define RSS_IPV6_INDEX 1
-#define RSS_TCP_INDEX 2
-#define RSS_UDP_INDEX 3
-#define RSS_SCTP_INDEX 4
-#define RSS_DMAC_INDEX 5
-
-uint32_t
-otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
- uint8_t rss_level)
-{
- uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
- {
- FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
- FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
- FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
- FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
- FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
- FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
- }
- };
- uint32_t flowkey_cfg = 0;
-
- dev->rss_info.nix_rss = ethdev_rss;
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
- }
-
- if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
-
- if (ethdev_rss & RSS_IPV4_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
-
- if (ethdev_rss & RSS_IPV6_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_TCP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_UDP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_SCTP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
- flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
-
- if (ethdev_rss & RSS_IPV6_EX_ENABLE)
- flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
-
- if (ethdev_rss & RTE_ETH_RSS_PORT)
- flowkey_cfg |= FLOW_KEY_TYPE_PORT;
-
- if (ethdev_rss & RTE_ETH_RSS_NVGRE)
- flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
-
- if (ethdev_rss & RTE_ETH_RSS_VXLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_GENEVE)
- flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
-
- if (ethdev_rss & RTE_ETH_RSS_GTPU)
- flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
-
- return flowkey_cfg;
-}
-
-int
-otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
- uint8_t *alg_idx, uint8_t group, int mcam_index)
-{
- struct nix_rss_flowkey_cfg_rsp *rss_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rss_flowkey_cfg *cfg;
- int rc;
-
- rc = -EINVAL;
-
- dev->rss_info.flowkey_cfg = flowkey_cfg;
-
- cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
-
- cfg->flowkey_cfg = flowkey_cfg;
- cfg->mcam_index = mcam_index; /* -1 indicates default group */
- cfg->group = group; /* 0 is default group */
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
- if (rc)
- return rc;
-
- if (alg_idx)
- *alg_idx = rss_rsp->alg_idx;
-
- return rc;
-}
-
-int
-otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint8_t alg_idx;
- int rc;
-
- rc = -EINVAL;
-
- if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
- otx2_err("Hash key size mismatch %d vs %d",
- rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
- goto fail;
- }
-
- if (rss_conf->rss_key)
- otx2_nix_rss_set_key(dev, rss_conf->rss_key,
- (uint32_t)rss_conf->rss_key_len);
-
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg =
- otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (rss_conf->rss_key)
- rss_get_key(dev, rss_conf->rss_key);
-
- rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
- rss_conf->rss_hf = dev->rss_info.nix_rss;
-
- return 0;
-}
-
-int
-otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint64_t rss_hf;
- uint8_t alg_idx;
- int rc;
-
- /* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
- return 0;
-
- /* Update default RSS key and cfg */
- otx2_nix_rss_set_key(dev, NULL, 0);
-
- /* Update default RSS RETA */
- for (idx = 0; idx < dev->rss_info.rss_size; idx++)
- dev->rss_info.ind_tbl[idx] = idx % qcnt;
-
- /* Init RSS table context */
- rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
- if (rc) {
- otx2_err("Failed to init RSS table rc=%d", rc);
- return rc;
- }
-
- rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
- return 0;
-}
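[Editorial aside, not part of the patch: the deleted `otx2_rss_ethdev_to_nix()` above translates ethdev `RTE_ETH_RSS_*` request bits into hardware flowkey bits, selecting outer, inner, or both key types by RSS level. A minimal self-contained sketch of that flag-translation pattern, using hypothetical simplified flag values rather than the real DPDK/NIX definitions:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simplified flag values, standing in for
 * RTE_ETH_RSS_* (left) and FLOW_KEY_TYPE_* (right). */
#define ETH_RSS_IPV4 (1ULL << 0)
#define ETH_RSS_TCP  (1ULL << 1)

#define FK_IPV4      (1u << 0)
#define FK_INNR_IPV4 (1u << 1)
#define FK_TCP       (1u << 2)
#define FK_INNR_TCP  (1u << 3)

/* Map requested RSS bits to flowkey bits; rss_level selects
 * outer (0), inner (1), or both (2), as in the deleted driver. */
static uint32_t
rss_to_flowkey(uint64_t ethdev_rss, uint8_t rss_level)
{
	const uint32_t fk[3][2] = {
		{ FK_IPV4,              FK_TCP },
		{ FK_INNR_IPV4,         FK_INNR_TCP },
		{ FK_IPV4 | FK_INNR_IPV4, FK_TCP | FK_INNR_TCP },
	};
	uint32_t cfg = 0;

	if (ethdev_rss & ETH_RSS_IPV4)
		cfg |= fk[rss_level][0];
	if (ethdev_rss & ETH_RSS_TCP)
		cfg |= fk[rss_level][1];
	return cfg;
}
```

The per-level lookup table keeps the per-protocol `if` chain flat: each protocol is tested once, and the level decides which key bits that test contributes.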
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
deleted file mode 100644
index 5ee1aed786..0000000000
--- a/drivers/net/octeontx2/otx2_rx.c
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_rx.h"
-
-#define NIX_DESCS_PER_LOOP 4
-#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
-#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
-
-static inline uint16_t
-nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
- const uint16_t pkts, const uint32_t qmask)
-{
- uint32_t available = rxq->available;
-
- /* Update the available count if cached value is not enough */
- if (unlikely(available < pkts)) {
- uint64_t reg, head, tail;
-
- /* Use LDADDA version to avoid reorder */
- reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
- /* CQ_OP_STATUS operation error */
- if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
- reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
- return 0;
-
- tail = reg & 0xFFFFF;
- head = (reg >> 20) & 0xFFFFF;
- if (tail < head)
- available = tail - head + qmask + 1;
- else
- available = tail - head;
-
- rxq->available = available;
- }
-
- return RTE_MIN(pkts, available);
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- const uint64_t mbuf_init = rxq->mbuf_initializer;
- const void *lookup_mem = rxq->lookup_mem;
- const uint64_t data_off = rxq->data_off;
- const uintptr_t desc = rxq->desc;
- const uint64_t wdata = rxq->wdata;
- const uint32_t qmask = rxq->qmask;
- uint16_t packets = 0, nb_pkts;
- uint32_t head = rxq->head;
- struct nix_cqe_hdr_s *cq;
- struct rte_mbuf *mbuf;
-
- nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
-
- while (packets < nb_pkts) {
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(desc +
- (CQE_SZ((head + 2) & qmask))));
- cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
-
- mbuf = nix_get_mbuf_from_cqe(cq, data_off);
-
- otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
- flags);
- otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags,
- (uint64_t *)((uint8_t *)mbuf + data_off));
- rx_pkts[packets++] = mbuf;
- otx2_prefetch_store_keep(mbuf);
- head++;
- head &= qmask;
- }
-
- rxq->head = head;
- rxq->available -= nb_pkts;
-
- /* Free all the CQs that we've processed */
- otx2_write64((wdata | nb_pkts), rxq->cq_door);
-
- return nb_pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-static __rte_always_inline uint64_t
-nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
-{
- if (w2 & BIT_ULL(21) /* vtag0_gone */) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint64_t
-nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
-{
- if (w2 & BIT_ULL(23) /* vtag1_gone */) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0;
- uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
- const uint64_t mbuf_initializer = rxq->mbuf_initializer;
- const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
- uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
- uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
- struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- const uint16_t *lookup_mem = rxq->lookup_mem;
- const uint32_t qmask = rxq->qmask;
- const uint64_t wdata = rxq->wdata;
- const uintptr_t desc = rxq->desc;
- uint8x16_t f0, f1, f2, f3;
- uint32_t head = rxq->head;
- uint16_t pkts_left;
-
- pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
-
- /* Packets has to be floor-aligned to NIX_DESCS_PER_LOOP */
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- while (packets < pkts) {
- /* Exit loop if head is about to wrap and become unaligned */
- if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
- NIX_DESCS_PER_LOOP) {
- pkts_left += (pkts - packets);
- break;
- }
-
- const uintptr_t cq0 = desc + CQE_SZ(head);
-
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
-
- /* Get NIX_RX_SG_S for size and buffer pointer */
- cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
- cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
- cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
- cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
-
- /* Extract mbuf from NIX_RX_SG_S */
- mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
- mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
- mbuf01 = vqsubq_u64(mbuf01, data_off);
- mbuf23 = vqsubq_u64(mbuf23, data_off);
-
- /* Move mbufs to scalar registers for future use */
- mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
- mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
- mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
- mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
-
- /* Mask to get packet len from NIX_RX_SG_S */
- const uint8x16_t shuf_msk = {
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0, 1, /* octet 1~0, low 16 bits pkt_len */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 0, 1, /* octet 1~0, 16 bits data_len */
- 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF
- };
-
- /* Form the rx_descriptor_fields1 with pkt_len and data_len */
- f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
- f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
- f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
- f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
-
- /* Load CQE word0 and word 1 */
- uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
- uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
- uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
- uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
- uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
- uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
- uint64_t cq3_w0 = ((uint64_t *)(cq0 + CQE_SZ(3)))[0];
- uint64_t cq3_w1 = ((uint64_t *)(cq0 + CQE_SZ(3)))[1];
-
- if (flags & NIX_RX_OFFLOAD_RSS_F) {
- /* Fill rss in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(cq0_w0, f0, 3);
- f1 = vsetq_lane_u32(cq1_w0, f1, 3);
- f2 = vsetq_lane_u32(cq2_w0, f2, 3);
- f3 = vsetq_lane_u32(cq3_w0, f3, 3);
- ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
- } else {
- ol_flags0 = 0; ol_flags1 = 0;
- ol_flags2 = 0; ol_flags3 = 0;
- }
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
- /* Fill packet_type in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq0_w1),
- f0, 0);
- f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq1_w1),
- f1, 0);
- f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq2_w1),
- f2, 0);
- f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq3_w1),
- f3, 0);
- }
-
- if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
- ol_flags0 |= nix_rx_olflags_get(lookup_mem, cq0_w1);
- ol_flags1 |= nix_rx_olflags_get(lookup_mem, cq1_w1);
- ol_flags2 |= nix_rx_olflags_get(lookup_mem, cq2_w1);
- ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
- }
-
- if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
- uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
- uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
- uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
-
- ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
- ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
- ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
- ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
-
- ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
- ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
- ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
- ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
- }
-
- if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
- ol_flags0 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
- ol_flags1 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
- ol_flags2 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
- ol_flags3 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
- }
-
- /* Form rearm_data with ol_flags */
- rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
- rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
- rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
- rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
-
- /* Update rx_descriptor_fields1 */
- vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
- vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
- vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
- vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
-
- /* Update rearm_data */
- vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
- vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
- vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
- vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
-
- /* Update that no more segments */
- mbuf0->next = NULL;
- mbuf1->next = NULL;
- mbuf2->next = NULL;
- mbuf3->next = NULL;
-
- /* Store the mbufs to rx_pkts */
- vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
- vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
-
- /* Prefetch mbufs */
- otx2_prefetch_store_keep(mbuf0);
- otx2_prefetch_store_keep(mbuf1);
- otx2_prefetch_store_keep(mbuf2);
- otx2_prefetch_store_keep(mbuf3);
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
-
- /* Advance head pointer and packets */
- head += NIX_DESCS_PER_LOOP; head &= qmask;
- packets += NIX_DESCS_PER_LOOP;
- }
-
- rxq->head = head;
- rxq->available -= packets;
-
- rte_io_wmb();
- /* Free all the CQs that we've processed */
- otx2_write64((rxq->wdata | packets), rxq->cq_door);
-
- if (unlikely(pkts_left))
- packets += nix_recv_pkts(rx_queue, &rx_pkts[packets],
- pkts_left, flags);
-
- return packets;
-}
-
-#else
-
-static inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- RTE_SET_USED(rx_queue);
- RTE_SET_USED(rx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(flags);
-
- return 0;
-}
-
-#endif
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
- (flags) | NIX_RX_MULTI_SEG_F); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- /* TSTMP is not supported by vector */ \
- if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
- return 0; \
- return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
-} \
-
-NIX_RX_FASTPATH_MODES
-#undef R
-
-static inline void
-pick_rx_func(struct rte_eth_dev *eth_dev,
- const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
- eth_dev->rx_pkt_burst = rx_burst
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
-}
-
-void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- /* For PTP enabled, scalar rx function should be chosen as most of the
- * PTP apps are implemented to rx burst 1 pkt.
- */
- if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- pick_rx_func(eth_dev, nix_eth_rx_burst);
- else
- pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
-
- /* Copy multi seg version with no offload for tear down sequence */
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- dev->rx_pkt_burst_no_offload =
- nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
- rte_mb();
-}
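[Editorial aside, not part of the patch: `pick_rx_func()` above selects one of 2^7 specialized burst functions by indexing a multidimensional array with `!!(flags & F)` per offload bit, so each offload combination gets a branch-free fast path compiled from the single `flags`-parameterized template. A reduced two-flag sketch of that dispatch pattern, with hypothetical names:]

```c
#include <assert.h>
#include <stdint.h>

typedef int (*rx_burst_t)(void);

/* Stand-ins for the generated otx2_nix_recv_pkts_* variants. */
static int rx_no_offload(void) { return 0; }
static int rx_rss(void)        { return 1; }
static int rx_cksum(void)      { return 2; }
static int rx_cksum_rss(void)  { return 3; }

#define OFF_RSS   (1u << 0)
#define OFF_CKSUM (1u << 1)

/* [CKSUM] [RSS]: each dimension is one offload bit, so the
 * runtime flags pick the variant compiled for exactly that set. */
static rx_burst_t
pick_rx(uint16_t flags)
{
	static const rx_burst_t tbl[2][2] = {
		{ rx_no_offload, rx_rss },
		{ rx_cksum,      rx_cksum_rss },
	};

	return tbl[!!(flags & OFF_CKSUM)][!!(flags & OFF_RSS)];
}
```

In the real driver the table bodies are stamped out by the `NIX_RX_FASTPATH_MODES`/`R()` macro list, so adding an offload flag only grows the table one dimension rather than adding runtime branches to the hot loop.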
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
deleted file mode 100644
index 98406244e2..0000000000
--- a/drivers/net/octeontx2/otx2_rx.h
+++ /dev/null
@@ -1,583 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RX_H__
-#define __OTX2_RX_H__
-
-#include <rte_ether.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_fp.h"
-
-/* Default mark value used when none is provided. */
-#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
-
-#define PTYPE_NON_TUNNEL_WIDTH 16
-#define PTYPE_TUNNEL_WIDTH 12
-#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_NON_TUNNEL_WIDTH)
-#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_TUNNEL_WIDTH)
-#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
- PTYPE_TUNNEL_ARRAY_SZ) *\
- sizeof(uint16_t))
-
-#define NIX_RX_OFFLOAD_NONE (0)
-#define NIX_RX_OFFLOAD_RSS_F BIT(0)
-#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
-#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
-#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
-#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
-#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
-#define NIX_RX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control cqe_to_mbuf conversion function.
- * Defining it from backwards to denote its been
- * not used as offload flags to pick function
- */
-#define NIX_RX_MULTI_SEG_F BIT(15)
-#define NIX_TIMESYNC_RX_OFFSET 8
-
-/* Inline IPsec offsets */
-
-/* nix_cqe_hdr_s + nix_rx_parse_s + nix_rx_sg_s + nix_iova_s */
-#define INLINE_CPT_RESULT_OFFSET 80
-
-struct otx2_timesync_info {
- uint64_t rx_tstamp;
- rte_iova_t tx_tstamp_iova;
- uint64_t *tx_tstamp;
- uint64_t rx_tstamp_dynflag;
- int tstamp_dynfield_offset;
- uint8_t tx_ready;
- uint8_t rx_ready;
-} __rte_cache_aligned;
-
-union mbuf_initializer {
- struct {
- uint16_t data_off;
- uint16_t refcnt;
- uint16_t nb_segs;
- uint16_t port;
- } fields;
- uint64_t value;
-};
-
-static inline rte_mbuf_timestamp_t *
-otx2_timestamp_dynfield(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *info)
-{
- return RTE_MBUF_DYNFIELD(mbuf,
- info->tstamp_dynfield_offset, rte_mbuf_timestamp_t *);
-}
-
-static __rte_always_inline void
-otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *tstamp, const uint16_t flag,
- uint64_t *tstamp_ptr)
-{
- if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
- (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
- NIX_TIMESYNC_RX_OFFSET)) {
-
- mbuf->pkt_len -= NIX_TIMESYNC_RX_OFFSET;
-
- /* Reading the rx timestamp inserted by CGX, viz at
- * starting of the packet data.
- */
- *otx2_timestamp_dynfield(mbuf, tstamp) =
- rte_be_to_cpu_64(*tstamp_ptr);
- /* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
- * PTP packets are received.
- */
- if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
- tstamp->rx_tstamp =
- *otx2_timestamp_dynfield(mbuf, tstamp);
- tstamp->rx_ready = 1;
- mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
- RTE_MBUF_F_RX_IEEE1588_TMST |
- tstamp->rx_tstamp_dynflag;
- }
- }
-}
-
-static __rte_always_inline uint64_t
-nix_clear_data_off(uint64_t oldval)
-{
- union mbuf_initializer mbuf_init = { .value = oldval };
-
- mbuf_init.fields.data_off = 0;
- return mbuf_init.value;
-}
-
-static __rte_always_inline struct rte_mbuf *
-nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
-{
- rte_iova_t buff;
-
- /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
- buff = *((rte_iova_t *)((uint64_t *)cq + 9));
- return (struct rte_mbuf *)(buff - data_off);
-}
-
-
-static __rte_always_inline uint32_t
-nix_ptype_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint16_t * const ptype = lookup_mem;
- const uint16_t lh_lg_lf = (in & 0xFFF0000000000000) >> 52;
- const uint16_t tu_l2 = ptype[(in & 0x000FFFF000000000) >> 36];
- const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lh_lg_lf];
-
- return (il4_tu << PTYPE_NON_TUNNEL_WIDTH) | tu_l2;
-}
-
-static __rte_always_inline uint32_t
-nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint32_t * const ol_flags = (const uint32_t *)
- ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
-
- return ol_flags[(in & 0xfff00000) >> 20];
-}
-
-static inline uint64_t
-nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
- struct rte_mbuf *mbuf)
-{
- /* There is no separate bit to check match_id
- * is valid or not? and no flag to identify it is an
- * RTE_FLOW_ACTION_TYPE_FLAG vs RTE_FLOW_ACTION_TYPE_MARK
- * action. The former case addressed through 0 being invalid
- * value and inc/dec match_id pair when MARK is activated.
- * The later case addressed through defining
- * OTX2_FLOW_MARK_DEFAULT as value for
- * RTE_FLOW_ACTION_TYPE_MARK.
- * This would translate to not use
- * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
- * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id.
- * i.e valid mark_id's are from
- * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
- */
- if (likely(match_id)) {
- ol_flags |= RTE_MBUF_F_RX_FDIR;
- if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
- mbuf->hash.fdir.hi = match_id - 1;
- }
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline void
-nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
- struct rte_mbuf *mbuf, uint64_t rearm)
-{
- const rte_iova_t *iova_list;
- struct rte_mbuf *head;
- const rte_iova_t *eol;
- uint8_t nb_segs;
- uint64_t sg;
-
- sg = *(const uint64_t *)(rx + 1);
- nb_segs = (sg >> 48) & 0x3;
- mbuf->nb_segs = nb_segs;
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
-
- eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
- /* Skip SG_S and first IOVA*/
- iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
- nb_segs--;
-
- rearm = rearm & ~0xFFFF;
-
- head = mbuf;
- while (nb_segs) {
- mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
- mbuf = mbuf->next;
-
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
- *(uint64_t *)(&mbuf->rearm_data) = rearm;
- nb_segs--;
- iova_list++;
-
- if (!nb_segs && (iova_list + 1 < eol)) {
- sg = *(const uint64_t *)(iova_list);
- nb_segs = (sg >> 48) & 0x3;
- head->nb_segs += nb_segs;
- iova_list = (const rte_iova_t *)(iova_list + 1);
- }
- }
- mbuf->next = NULL;
-}
-
-static __rte_always_inline uint16_t
-nix_rx_sec_cptres_get(const void *cq)
-{
- volatile const struct otx2_cpt_res *res;
-
- res = (volatile const struct otx2_cpt_res *)((const char *)cq +
- INLINE_CPT_RESULT_OFFSET);
-
- return res->u16[0];
-}
-
-static __rte_always_inline void *
-nix_rx_sec_sa_get(const void * const lookup_mem, int spi, uint16_t port)
-{
- const uint64_t *const *sa_tbl = (const uint64_t * const *)
- ((const uint8_t *)lookup_mem + OTX2_NIX_SA_TBL_START);
-
- return (void *)sa_tbl[port][spi];
-}
-
-static __rte_always_inline uint64_t
-nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
- const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
- const void * const lookup_mem)
-{
- uint8_t *l2_ptr, *l3_ptr, *l2_ptr_actual, *l3_ptr_actual;
- struct otx2_ipsec_fp_in_sa *sa;
- uint16_t m_len, l2_len, ip_len;
- struct rte_ipv6_hdr *ip6h;
- struct rte_ipv4_hdr *iph;
- uint16_t *ether_type;
- uint32_t spi;
- int i;
-
- if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
-
- /* 20 bits of tag would have the SPI */
- spi = cq->tag & 0xFFFFF;
-
- sa = nix_rx_sec_sa_get(lookup_mem, spi, m->port);
- *rte_security_dynfield(m) = sa->udata64;
-
- l2_ptr = rte_pktmbuf_mtod(m, uint8_t *);
- l2_len = rx->lcptr - rx->laptr;
- l3_ptr = RTE_PTR_ADD(l2_ptr, l2_len);
-
- if (sa->replay_win_sz) {
- if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
- }
-
- l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
- l3_ptr_actual = RTE_PTR_ADD(l3_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
-
- for (i = l2_len - RTE_ETHER_TYPE_LEN - 1; i >= 0; i--)
- l2_ptr_actual[i] = l2_ptr[i];
-
- m->data_off += sizeof(struct otx2_ipsec_fp_res_hdr);
-
- ether_type = RTE_PTR_SUB(l3_ptr_actual, RTE_ETHER_TYPE_LEN);
-
- iph = (struct rte_ipv4_hdr *)l3_ptr_actual;
- if ((iph->version_ihl >> 4) == 4) {
- ip_len = rte_be_to_cpu_16(iph->total_length);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- } else {
- ip6h = (struct rte_ipv6_hdr *)iph;
- ip_len = rte_be_to_cpu_16(ip6h->payload_len);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- }
-
- m_len = ip_len + l2_len;
- m->data_len = m_len;
- m->pkt_len = m_len;
- return RTE_MBUF_F_RX_SEC_OFFLOAD;
-}
-
-static __rte_always_inline void
-otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
- struct rte_mbuf *mbuf, const void *lookup_mem,
- const uint64_t val, const uint16_t flag)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
- const uint64_t w1 = *(const uint64_t *)rx;
- const uint16_t len = rx->pkt_lenm1 + 1;
- uint64_t ol_flags = 0;
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- if (flag & NIX_RX_OFFLOAD_PTYPE_F)
- mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
- else
- mbuf->packet_type = 0;
-
- if (flag & NIX_RX_OFFLOAD_RSS_F) {
- mbuf->hash.rss = tag;
- ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- }
-
- if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
- ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
-
- if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- if (rx->vtag0_gone) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- mbuf->vlan_tci = rx->vtag0_tci;
- }
- if (rx->vtag1_gone) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = rx->vtag1_tci;
- }
- }
-
- if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
- ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
-
- if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
- cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
- *(uint64_t *)(&mbuf->rearm_data) = val;
- ol_flags |= nix_rx_sec_mbuf_update(rx, cq, mbuf, lookup_mem);
- mbuf->ol_flags = ol_flags;
- return;
- }
-
- mbuf->ol_flags = ol_flags;
- *(uint64_t *)(&mbuf->rearm_data) = val;
- mbuf->pkt_len = len;
-
- if (flag & NIX_RX_MULTI_SEG_F) {
- nix_cqe_xtract_mseg(rx, mbuf, val);
- } else {
- mbuf->data_len = len;
- mbuf->next = NULL;
- }
-}
-
-#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
-#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
-#define RSS_F NIX_RX_OFFLOAD_RSS_F
-#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
-#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
-#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
-#define RX_SEC_F NIX_RX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
-#define NIX_RX_FASTPATH_MODES \
-R(no_offload, 0, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
-R(rss, 0, 0, 0, 0, 0, 0, 1, RSS_F) \
-R(ptype, 0, 0, 0, 0, 0, 1, 0, PTYPE_F) \
-R(ptype_rss, 0, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
-R(cksum, 0, 0, 0, 0, 1, 0, 0, CKSUM_F) \
-R(cksum_rss, 0, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
-R(cksum_ptype, 0, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
-R(cksum_ptype_rss, 0, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
-R(vlan, 0, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
-R(vlan_rss, 0, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
-R(vlan_ptype, 0, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
-R(vlan_ptype_rss, 0, 0, 0, 1, 0, 1, 1, \
- RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum, 0, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
-R(vlan_cksum_rss, 0, 0, 0, 1, 1, 0, 1, \
- RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype, 0, 0, 0, 1, 1, 1, 0, \
- RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(vlan_cksum_ptype_rss, 0, 0, 0, 1, 1, 1, 1, \
- RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark, 0, 0, 1, 0, 0, 0, 0, MARK_F) \
-R(mark_rss, 0, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
-R(mark_ptype, 0, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
-R(mark_ptype_rss, 0, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F) \
-R(mark_cksum, 0, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
-R(mark_cksum_rss, 0, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F) \
-R(mark_cksum_ptype, 0, 0, 1, 0, 1, 1, 0, \
- MARK_F | CKSUM_F | PTYPE_F) \
-R(mark_cksum_ptype_rss, 0, 0, 1, 0, 1, 1, 1, \
- MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark_vlan, 0, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
-R(mark_vlan_rss, 0, 0, 1, 1, 0, 0, 1, \
- MARK_F | RX_VLAN_F | RSS_F) \
-R(mark_vlan_ptype, 0, 0, 1, 1, 0, 1, 0, \
- MARK_F | RX_VLAN_F | PTYPE_F) \
-R(mark_vlan_ptype_rss, 0, 0, 1, 1, 0, 1, 1, \
- MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(mark_vlan_cksum, 0, 0, 1, 1, 1, 0, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F) \
-R(mark_vlan_cksum_rss, 0, 0, 1, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(mark_vlan_cksum_ptype, 0, 0, 1, 1, 1, 1, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(mark_vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts, 0, 1, 0, 0, 0, 0, 0, TS_F) \
-R(ts_rss, 0, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
-R(ts_ptype, 0, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
-R(ts_ptype_rss, 0, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F) \
-R(ts_cksum, 0, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
-R(ts_cksum_rss, 0, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F) \
-R(ts_cksum_ptype, 0, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F) \
-R(ts_cksum_ptype_rss, 0, 1, 0, 0, 1, 1, 1, \
- TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_vlan, 0, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
-R(ts_vlan_rss, 0, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F) \
-R(ts_vlan_ptype, 0, 1, 0, 1, 0, 1, 0, \
- TS_F | RX_VLAN_F | PTYPE_F) \
-R(ts_vlan_ptype_rss, 0, 1, 0, 1, 0, 1, 1, \
- TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_vlan_cksum, 0, 1, 0, 1, 1, 0, 0, \
- TS_F | RX_VLAN_F | CKSUM_F) \
-R(ts_vlan_cksum_rss, 0, 1, 0, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(ts_vlan_cksum_ptype, 0, 1, 0, 1, 1, 1, 0, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_vlan_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, 1, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark, 0, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
-R(ts_mark_rss, 0, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F) \
-R(ts_mark_ptype, 0, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F) \
-R(ts_mark_ptype_rss, 0, 1, 1, 0, 0, 1, 1, \
- TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(ts_mark_cksum, 0, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F) \
-R(ts_mark_cksum_rss, 0, 1, 1, 0, 1, 0, 1, \
- TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(ts_mark_cksum_ptype, 0, 1, 1, 0, 1, 1, 0, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_cksum_ptype_rss, 0, 1, 1, 0, 1, 1, 1, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan, 0, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
-R(ts_mark_vlan_rss, 0, 1, 1, 1, 0, 0, 1, \
- TS_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(ts_mark_vlan_ptype, 0, 1, 1, 1, 0, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(ts_mark_vlan_ptype_rss, 0, 1, 1, 1, 0, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec, 1, 0, 0, 0, 0, 0, 0, RX_SEC_F) \
-R(sec_rss, 1, 0, 0, 0, 0, 0, 1, RX_SEC_F | RSS_F) \
-R(sec_ptype, 1, 0, 0, 0, 0, 1, 0, RX_SEC_F | PTYPE_F) \
-R(sec_ptype_rss, 1, 0, 0, 0, 0, 1, 1, \
- RX_SEC_F | PTYPE_F | RSS_F) \
-R(sec_cksum, 1, 0, 0, 0, 1, 0, 0, RX_SEC_F | CKSUM_F) \
-R(sec_cksum_rss, 1, 0, 0, 0, 1, 0, 1, \
- RX_SEC_F | CKSUM_F | RSS_F) \
-R(sec_cksum_ptype, 1, 0, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_cksum_ptype_rss, 1, 0, 0, 0, 1, 1, 1, \
- RX_SEC_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_vlan, 1, 0, 0, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_vlan_rss, 1, 0, 0, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_vlan_ptype, 1, 0, 0, 1, 0, 1, 0, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F) \
-R(sec_vlan_ptype_rss, 1, 0, 0, 1, 0, 1, 1, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_vlan_cksum, 1, 0, 0, 1, 1, 0, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F) \
-R(sec_vlan_cksum_rss, 1, 0, 0, 1, 1, 0, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_vlan_cksum_ptype, 1, 0, 0, 1, 1, 1, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_vlan_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark, 1, 0, 1, 0, 0, 0, 0, RX_SEC_F | MARK_F) \
-R(sec_mark_rss, 1, 0, 1, 0, 0, 0, 1, RX_SEC_F | MARK_F | RSS_F)\
-R(sec_mark_ptype, 1, 0, 1, 0, 0, 1, 0, \
- RX_SEC_F | MARK_F | PTYPE_F) \
-R(sec_mark_ptype_rss, 1, 0, 1, 0, 0, 1, 1, \
- RX_SEC_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_mark_cksum, 1, 0, 1, 0, 1, 0, 0, \
- RX_SEC_F | MARK_F | CKSUM_F) \
-R(sec_mark_cksum_rss, 1, 0, 1, 0, 1, 0, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_mark_cksum_ptype, 1, 0, 1, 0, 1, 1, 0, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_cksum_ptype_rss, 1, 0, 1, 0, 1, 1, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan, 1, 0, 1, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_mark_vlan_rss, 1, 0, 1, 1, 0, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(sec_mark_vlan_ptype, 1, 0, 1, 1, 0, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_mark_vlan_ptype_rss, 1, 0, 1, 1, 0, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan_cksum, 1, 0, 1, 1, 1, 0, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_mark_vlan_cksum_rss, 1, 0, 1, 1, 1, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_mark_vlan_cksum_ptype, 1, 0, 1, 1, 1, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_vlan_cksum_ptype_rss, \
- 1, 0, 1, 1, 1, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts, 1, 1, 0, 0, 0, 0, 0, RX_SEC_F | TS_F) \
-R(sec_ts_rss, 1, 1, 0, 0, 0, 0, 1, RX_SEC_F | TS_F | RSS_F) \
-R(sec_ts_ptype, 1, 1, 0, 0, 0, 1, 0, RX_SEC_F | TS_F | PTYPE_F)\
-R(sec_ts_ptype_rss, 1, 1, 0, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | PTYPE_F | RSS_F) \
-R(sec_ts_cksum, 1, 1, 0, 0, 1, 0, 0, RX_SEC_F | TS_F | CKSUM_F)\
-R(sec_ts_cksum_rss, 1, 1, 0, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | CKSUM_F | RSS_F) \
-R(sec_ts_cksum_ptype, 1, 1, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_cksum_ptype_rss, 1, 1, 0, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan, 1, 1, 0, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F) \
-R(sec_ts_vlan_rss, 1, 1, 0, 1, 0, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_vlan_ptype, 1, 1, 0, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_vlan_ptype_rss, 1, 1, 0, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan_cksum, 1, 1, 0, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_vlan_cksum_rss, 1, 1, 0, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_ts_vlan_cksum_ptype, 1, 1, 0, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_vlan_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts_mark, 1, 1, 1, 0, 0, 0, 0, RX_SEC_F | TS_F | MARK_F) \
-R(sec_ts_mark_rss, 1, 1, 1, 0, 0, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RSS_F) \
-R(sec_ts_mark_ptype, 1, 1, 1, 0, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F) \
-R(sec_ts_mark_ptype_rss, 1, 1, 1, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_cksum, 1, 1, 1, 0, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F) \
-R(sec_ts_mark_cksum_rss, 1, 1, 1, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_ts_mark_cksum_ptype, 1, 1, 1, 0, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_mark_cksum_ptype_rss, 1, 1, 1, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_vlan, 1, 1, 1, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F) \
-R(sec_ts_mark_vlan_rss, 1, 1, 1, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_mark_vlan_ptype, 1, 1, 1, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_mark_vlan_ptype_rss, 1, 1, 1, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum, 1, 1, 1, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F) \
-R(sec_ts_mark_vlan_cksum_ptype_rss, \
- 1, 1, 1, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F | RSS_F)
-#endif /* __OTX2_RX_H__ */
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
deleted file mode 100644
index 3adf21608c..0000000000
--- a/drivers/net/octeontx2/otx2_stats.c
+++ /dev/null
@@ -1,397 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include "otx2_ethdev.h"
-
-struct otx2_nix_xstats_name {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- uint32_t offset;
-};
-
-static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
- {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
- {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
- {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
- {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
- {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
-};
-
-static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
- {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
- {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
- {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
- {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
- {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
- {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
- {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
- {"rx_err", NIX_STAT_LF_RX_RX_ERR},
- {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
- {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
- {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
- {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
-};
-
-static const struct otx2_nix_xstats_name nix_q_xstats[] = {
- {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
-};
-
-#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
-#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
-#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
-
-#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
- OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
-
-int
-otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t reg, val;
- uint32_t qidx, i;
- int64_t *addr;
-
- stats->opackets = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
- stats->oerrors = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
- stats->obytes = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
-
- stats->ipackets = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
- stats->imissed = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
- stats->ibytes = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
- stats->ierrors = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->txmap[i] & (1U << 31)) {
- qidx = dev->txmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_opackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_obytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] = val;
- }
- }
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->rxmap[i] & (1U << 31)) {
- qidx = dev->rxmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ipackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ibytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] += val;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- uint8_t stat_idx, uint8_t is_rx)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (is_rx)
- dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
- else
- dev->txmap[stat_idx] = ((1U << 31) | queue_id);
-
- return 0;
-}
-
-int
-otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- unsigned int i, count = 0;
- uint64_t reg, val;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (xstats == NULL)
- return 0;
-
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- reg = (((uint64_t)i) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
- nix_q_xstats[0].offset));
- if (val & OP_ERR)
- val = 0;
- xstats[count].value += val;
- }
- xstats[count].id = count;
- count++;
-
- return count;
-}
-
-int
-otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- unsigned int i, count = 0;
-
- RTE_SET_USED(eth_dev);
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
- return -ENOMEM;
-
- if (xstats_names) {
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_tx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_rx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_q_xstats[i].name);
- count++;
- }
- }
-
- return OTX2_NIX_NUM_XSTATS_REG;
-}
-
-int
-otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (limit > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (xstats_names == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
- sizeof(xstats_names[i].name));
- }
-
- return limit;
-}
-
-int
-otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (n > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (values == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get(eth_dev, xstats, n);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- values[i] = xstats[ids[i]].value;
- }
-
- return n;
-}
-
-static int
-nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint32_t i;
- int rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read rq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
- aq->rq.octs = 0;
- aq->rq.pkts = 0;
- aq->rq.drop_octs = 0;
- aq->rq.drop_pkts = 0;
- aq->rq.re_pkts = 0;
-
- aq->rq_mask.octs = ~(aq->rq_mask.octs);
- aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
- aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
- aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
- aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write rq context");
- return rc;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read sq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
- otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
- aq->sq.octs = 0;
- aq->sq.pkts = 0;
- aq->sq.drop_octs = 0;
- aq->sq.drop_pkts = 0;
-
- aq->sq_mask.octs = ~(aq->sq_mask.octs);
- aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
- aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
- aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write sq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int ret;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- ret = otx2_mbox_process(mbox);
- if (ret != 0)
- return ret;
-
- /* Reset queue stats */
- return nix_queue_stats_reset(eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
deleted file mode 100644
index 6aff1f9587..0000000000
--- a/drivers/net/octeontx2/otx2_tm.c
+++ /dev/null
@@ -1,3317 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_tm.h"
-
-/* Use last LVL_CNT nodes as default nodes */
-#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
-
-enum otx2_tm_node_level {
- OTX2_TM_LVL_ROOT = 0,
- OTX2_TM_LVL_SCH1,
- OTX2_TM_LVL_SCH2,
- OTX2_TM_LVL_SCH3,
- OTX2_TM_LVL_SCH4,
- OTX2_TM_LVL_QUEUE,
- OTX2_TM_LVL_MAX,
-};
-
-static inline
-uint64_t shaper2regval(struct shaper_params *shaper)
-{
- return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
- (shaper->div_exp << 13) | (shaper->exponent << 9) |
- (shaper->mantissa << 1);
-}
-
-int
-otx2_nix_get_link(struct otx2_eth_dev *dev)
-{
- int link = 13 /* SDP */;
- uint16_t lmac_chan;
- uint16_t map;
-
- lmac_chan = dev->tx_chan_base;
-
- /* CGX lmac link */
- if (lmac_chan >= 0x800) {
- map = lmac_chan & 0x7FF;
- link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
- } else if (lmac_chan < 0x700) {
- /* LBK channel */
- link = 12;
- }
-
- return link;
-}
-
-static uint8_t
-nix_get_relchan(struct otx2_eth_dev *dev)
-{
- return dev->tx_chan_base & 0xff;
-}
-
-static bool
-nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
-{
- bool is_lbk = otx2_dev_is_lbk(dev);
- return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
-}
-
-static bool
-nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
-{
- if (nix_tm_have_tl1_access(dev))
- return (lvl == OTX2_TM_LVL_QUEUE);
-
- return (lvl == OTX2_TM_LVL_SCH4);
-}
-
-static int
-find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
-{
- struct otx2_nix_tm_node *child_node;
-
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (!child_node->parent)
- continue;
- if (!(child_node->parent->id == node_id))
- continue;
- if (child_node->priority == child_node->parent->rr_prio)
- continue;
- return child_node->hw_id - child_node->priority;
- }
- return 0;
-}
-
-
-static struct otx2_nix_tm_shaper_profile *
-nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
-{
- struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
-
- TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
- if (tm_shaper_profile->shaper_profile_id == shaper_id)
- return tm_shaper_profile;
- }
- return NULL;
-}
-
-static inline uint64_t
-shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p, uint64_t *div_exp_p)
-{
- uint64_t div_exp, exponent, mantissa;
-
- /* Boundary checks */
- if (value < MIN_SHAPER_RATE ||
- value > MAX_SHAPER_RATE)
- return 0;
-
- if (value <= SHAPER_RATE(0, 0, 0)) {
- /* Calculate rate div_exp and mantissa using
- * the following formula:
- *
- * value = (2E6 * (256 + mantissa)
- * / ((1 << div_exp) * 256))
- */
- div_exp = 0;
- exponent = 0;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
- div_exp += 1;
-
- while (value <
- ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
- ((1 << div_exp) * 256)))
- mantissa -= 1;
- } else {
- /* Calculate rate exponent and mantissa using
- * the following formula:
- *
- * value = (2E6 * ((256 + mantissa) << exponent)) / 256
- *
- */
- div_exp = 0;
- exponent = MAX_RATE_EXPONENT;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
- exponent -= 1;
-
- while (value < ((NIX_SHAPER_RATE_CONST *
- ((256 + mantissa) << exponent)) / 256))
- mantissa -= 1;
- }
-
- if (div_exp > MAX_RATE_DIV_EXP ||
- exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
- return 0;
-
- if (div_exp_p)
- *div_exp_p = div_exp;
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- /* Calculate real rate value */
- return SHAPER_RATE(exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p)
-{
- uint64_t exponent, mantissa;
-
- if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
- return 0;
-
- /* Calculate burst exponent and mantissa using
- * the following formula:
- *
- * value = (((256 + mantissa) << (exponent + 1)
- / 256)
- *
- */
- exponent = MAX_BURST_EXPONENT;
- mantissa = MAX_BURST_MANTISSA;
-
- while (value < (1ull << (exponent + 1)))
- exponent -= 1;
-
- while (value < ((256 + mantissa) << (exponent + 1)) / 256)
- mantissa -= 1;
-
- if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
- return 0;
-
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- return SHAPER_BURST(exponent, mantissa);
-}
-
-static void
-shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
- struct shaper_params *cir,
- struct shaper_params *pir)
-{
- struct rte_tm_shaper_params *param = &profile->params;
-
- if (!profile)
- return;
-
- /* Calculate CIR exponent and mantissa */
- if (param->committed.rate)
- cir->rate = shaper_rate_to_nix(param->committed.rate,
- &cir->exponent,
- &cir->mantissa,
- &cir->div_exp);
-
- /* Calculate PIR exponent and mantissa */
- if (param->peak.rate)
- pir->rate = shaper_rate_to_nix(param->peak.rate,
- &pir->exponent,
- &pir->mantissa,
- &pir->div_exp);
-
- /* Calculate CIR burst exponent and mantissa */
- if (param->committed.size)
- cir->burst = shaper_burst_to_nix(param->committed.size,
- &cir->burst_exponent,
- &cir->burst_mantissa);
-
- /* Calculate PIR burst exponent and mantissa */
- if (param->peak.size)
- pir->burst = shaper_burst_to_nix(param->peak.size,
- &pir->burst_exponent,
- &pir->burst_mantissa);
-}
-
-static void
-shaper_default_red_algo(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile)
-{
- struct shaper_params cir, pir;
-
- /* C0 doesn't support STALL when both PIR & CIR are enabled */
- if (profile && otx2_dev_is_96xx_Cx(dev)) {
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- if (pir.rate && cir.rate) {
- tm_node->red_algo = NIX_REDALG_DISCARD;
- tm_node->flags |= NIX_TM_NODE_RED_DISCARD;
- return;
- }
- }
-
- tm_node->red_algo = NIX_REDALG_STD;
- tm_node->flags &= ~NIX_TM_NODE_RED_DISCARD;
-}
-
-static int
-populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
-
- /*
- * Default config for TL1.
- * For VF this is always ignored.
- */
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_TL1;
-
- /* Set DWRR quantum */
- req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
- req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
- req->num_regs++;
-
- req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
- req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
- req->num_regs++;
-
- req->reg[2] = NIX_AF_TL1X_CIR(schq);
- req->regval[2] = 0;
- req->num_regs++;
-
- return otx2_mbox_process(mbox);
-}
-
-static uint8_t
-prepare_tm_sched_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint64_t strict_prio = tm_node->priority;
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint64_t rr_quantum;
- uint8_t k = 0;
-
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- /* For children to root, strict prio is default if either
- * device root is TL2 or TL1 Static Priority is disabled.
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- dev->tm_flags & NIX_TM_TL1_NO_SP))
- strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
-
- otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
- "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, strict_prio, rr_quantum, tm_node);
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- regval[k] = rr_quantum;
- k++;
-
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- struct shaper_params cir, pir;
- uint32_t schq = tm_node->hw_id;
- uint64_t adjust = 0;
- uint8_t k = 0;
-
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- /* Packet length adjust */
- if (tm_node->pkt_mode)
- adjust = 1;
- else if (profile)
- adjust = profile->params.pkt_length_adjust & 0x1FF;
-
- otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
- "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
- "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
- adjust, tm_node->pkt_mode, tm_node);
-
- switch (tm_node->hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_MDQX_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED ALG */
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL4X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL3X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL2X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- /* Configure CIR */
- reg[k] = NIX_AF_TL1X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure length disable and adjust */
- reg[k] = NIX_AF_TL1X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint8_t k = 0;
-
- otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
- tm_node->id, enable, tm_node);
-
- regval[k] = enable;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- k++;
- break;
- default:
- break;
- }
-
- return k;
-}
-
-static int
-populate_tm_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
- uint64_t regval[MAX_REGS_PER_MBOX_MSG];
- uint64_t reg[MAX_REGS_PER_MBOX_MSG];
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t parent = 0, child = 0;
- uint32_t hw_lvl, rr_prio, schq;
- struct nix_txschq_config *req;
- int rc = -EFAULT;
- uint8_t k = 0;
-
- memset(regval_mask, 0, sizeof(regval_mask));
- profile = nix_tm_shaper_profile_search(dev,
- tm_node->params.shaper_profile_id);
- rr_prio = tm_node->rr_prio;
- hw_lvl = tm_node->hw_lvl;
- schq = tm_node->hw_id;
-
- /* Root node will not have a parent node */
- if (hw_lvl == dev->otx2_tm_root_lvl)
- parent = tm_node->parent_hw_id;
- else
- parent = tm_node->parent->hw_id;
-
- /* Do we need this trigger to configure TL1 */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- hw_lvl == dev->otx2_tm_root_lvl) {
- rc = populate_tm_tl1_default(dev, parent);
- if (rc)
- goto error;
- }
-
- if (hw_lvl != NIX_TXSCH_LVL_SMQ)
- child = find_prio_anchor(dev, tm_node->id);
-
- /* Override default rr_prio when TL1
- * Static Priority is disabled
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- dev->tm_flags & NIX_TM_TL1_NO_SP) {
- rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
- child = 0;
- }
-
- otx2_tm_dbg("Topology config node %s(%u)->%s(%"PRIu64") lvl %u, id %u"
- " prio_anchor %"PRIu64" rr_prio %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, nix_hwlvl2str(hw_lvl + 1),
- parent, tm_node->lvl, tm_node->id, child, rr_prio, tm_node);
-
- /* Prepare Topology and Link config */
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
-
- /* Set xoff which will be cleared later and minimum length
- * which will be used for zero padding if packet length is
- * smaller
- */
- reg[k] = NIX_AF_SMQX_CFG(schq);
- regval[k] = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
- NIX_MIN_HW_FRS;
- regval_mask[k] = ~(BIT_ULL(50) | (0x7ULL << 36) | 0x7f);
- k++;
-
- /* Parent and schedule conf */
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Configure TL4 to send to SDP channel instead of CGX/LBK */
- if (otx2_dev_is_sdp(dev)) {
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- regval[k] = BIT_ULL(12);
- k++;
- }
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
- k++;
-
- break;
- }
-
- /* Prepare schedule config */
-	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
-
- /* Prepare shaping config */
-	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
-
- if (!k)
- return 0;
-
- /* Copy and send config mbox */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = hw_lvl;
- req->num_regs = k;
-
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- goto error;
-
- return 0;
-error:
- otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
- return rc;
-}
-
-
-static int
-nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t hw_lvl;
- int rc = 0;
-
- for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl == hw_lvl &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
- rc = populate_tm_reg(dev, tm_node);
- if (rc)
- goto exit;
- }
- }
- }
-exit:
- return rc;
-}
-
-static struct otx2_nix_tm_node *
-nix_tm_node_search(struct otx2_eth_dev *dev,
- uint32_t node_id, bool user)
-{
- struct otx2_nix_tm_node *tm_node;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->id == node_id &&
- (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
- return tm_node;
- }
- return NULL;
-}
-
-static uint32_t
-check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->parent->id == parent_id))
- continue;
-
- if (tm_node->priority == priority)
- rr_num++;
- }
- return rr_num;
-}
-
-static int
-nix_tm_update_parent_info(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node_child;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_nix_tm_node *parent;
- uint32_t rr_num = 0;
- uint32_t priority;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
- /* Count group of children of same priority i.e are RR */
- parent = tm_node->parent;
- priority = tm_node->priority;
- rr_num = check_rr(dev, priority, parent->id);
-
- /* Assuming that multiple RR groups are
- * not configured based on capability.
- */
- if (rr_num > 1) {
- parent->rr_prio = priority;
- parent->rr_num = rr_num;
- }
-
- /* Find out static priority children that are not in RR */
- TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
- if (!tm_node_child->parent)
- continue;
- if (parent->id != tm_node_child->parent->id)
- continue;
- if (parent->max_prio == UINT32_MAX &&
- tm_node_child->priority != parent->rr_prio)
- parent->max_prio = 0;
-
- if (parent->max_prio < tm_node_child->priority &&
- parent->rr_prio != tm_node_child->priority)
- parent->max_prio = tm_node_child->priority;
- }
- }
-
- return 0;
-}
-
-static int
-nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint16_t hw_lvl,
- uint16_t lvl, bool user,
- struct rte_tm_node_params *params)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *parent_node;
- uint32_t profile_id;
-
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- parent_node = nix_tm_node_search(dev, parent_node_id, user);
-
- tm_node = rte_zmalloc("otx2_nix_tm_node",
- sizeof(struct otx2_nix_tm_node), 0);
- if (!tm_node)
- return -ENOMEM;
-
- tm_node->lvl = lvl;
- tm_node->hw_lvl = hw_lvl;
-
- /* Maintain minimum weight */
- if (!weight)
- weight = 1;
-
- tm_node->id = node_id;
- tm_node->priority = priority;
- tm_node->weight = weight;
- tm_node->rr_prio = 0xf;
- tm_node->max_prio = UINT32_MAX;
- tm_node->hw_id = UINT32_MAX;
- tm_node->flags = 0;
- if (user)
- tm_node->flags = NIX_TM_NODE_USER;
-
- /* Packet mode */
- if (!nix_tm_is_leaf(dev, lvl) &&
- ((profile && profile->params.packet_mode) ||
- (params->nonleaf.wfq_weight_mode &&
- params->nonleaf.n_sp_priorities &&
- !params->nonleaf.wfq_weight_mode[0])))
- tm_node->pkt_mode = 1;
-
- rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
-
- if (profile)
- profile->reference_count++;
-
- tm_node->parent = parent_node;
- tm_node->parent_hw_id = UINT32_MAX;
- shaper_default_red_algo(dev, tm_node, profile);
-
- TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
-
- return 0;
-}
-
-static int
-nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *shaper_profile;
-
- while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
- if (shaper_profile->reference_count)
- otx2_tm_dbg("Shaper profile %u has non zero references",
- shaper_profile->shaper_profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
- rte_free(shaper_profile);
- }
-
- return 0;
-}
-
-static int
-nix_clear_path_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct nix_txschq_config *req;
- struct otx2_nix_tm_node *p;
- int rc;
-
- /* Manipulating SW_XOFF not supported on Ax */
- if (otx2_dev_is_Ax(dev))
- return 0;
-
- /* Enable nodes in path for flush to succeed */
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- p = tm_node;
- else
- p = tm_node->parent;
- while (p) {
- if (!(p->flags & NIX_TM_NODE_ENABLED) &&
- (p->flags & NIX_TM_NODE_HWRES)) {
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = p->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
- req->regval);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
-
- p->flags |= NIX_TM_NODE_ENABLED;
- }
- p = p->parent;
- }
-
- return 0;
-}
-
-static int
-nix_smq_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
- uint16_t smq;
- int rc;
-
- smq = tm_node->hw_id;
- otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
- enable ? "enable" : "disable");
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_SMQ;
- req->num_regs = 1;
-
- req->reg[0] = NIX_AF_SMQX_CFG(smq);
- req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
- req->regval_mask[0] = enable ?
- ~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
-{
- struct otx2_eth_txq *txq = __txq;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint64_t aura_handle;
- int rc;
-
- otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
- enable ? "enable" : "disable");
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return -EFAULT;
- mbox = lf->mbox;
- /* Set/clear sqb aura fc_ena */
- aura_handle = txq->sqb_pool->pool_id;
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
- /* Below is not needed for aura writes but AF driver needs it */
- /* AF will translate to associated poolctx */
- req->aura.pool_addr = req->aura_id;
-
- req->aura.fc_ena = enable;
- req->aura_mask.fc_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Read back npa aura ctx */
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Init when enabled as there might be no triggers */
- if (enable)
- *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
- else
- *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
- /* Sync write barrier */
- rte_wmb();
-
- return 0;
-}
-
-static int
-nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
-{
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_eth_dev *dev = txq->dev;
- uint64_t wdata, val, prev;
- uint16_t sq = txq->sq;
- int64_t *regaddr;
- uint64_t timeout;/* 10's of usec */
-
- /* Wait for enough time based on shaper min rate */
- timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
- timeout = timeout / dev->tm_rate_min;
- if (!timeout)
- timeout = 10000;
-
- wdata = ((uint64_t)sq << 32);
- regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
-
- /* Spin multiple iterations as "txq->fc_cache_pkts" can still
- * have space to send pkts even though fc_mem is disabled
- */
-
- while (true) {
- prev = val;
- rte_delay_us(10);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
- /* Continue on error */
- if (val & BIT_ULL(63))
- continue;
-
- if (prev != val)
- continue;
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- /* SQ reached quiescent state */
- if (sqb_cnt <= 1 && head_off == tail_off &&
- (*txq->fc_mem == txq->nb_sqb_bufs)) {
- break;
- }
-
- /* Timeout */
- if (!timeout)
- goto exit;
- timeout--;
- }
-
- return 0;
-exit:
- otx2_nix_tm_dump(dev);
- return -EFAULT;
-}
-
-/* Flush and disable tx queue and its parent SMQ */
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq;
- struct otx2_eth_dev *dev;
- uint16_t sq;
- bool user;
- int rc;
-
- txq = _txq;
- dev = txq->dev;
- sq = txq->sq;
-
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
- otx2_err("Invalid node/state for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable CGX RXTX to drain pkts */
- if (!dev_started) {
- /* Though it enables both RX MCAM Entries and CGX Link
- * we assume all the rx queues are stopped way back.
- */
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc) {
- otx2_err("cgx start failed, rc=%d", rc);
- return rc;
- }
- }
-
- /* Disable smq xoff for case it was enabled earlier */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
-
- /* As per HRM, to disable an SQ, all other SQ's
- * that feed to same SMQ must be paused before SMQ flush.
- */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- sq = sibling->id;
- txq = dev->eth_dev->data->tx_queues[sq];
- if (!txq)
- continue;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
- otx2_err("Failed to drain sq %u, rc=%d\n", txq->sq, rc);
- return rc;
- }
- }
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Disable and flush */
- rc = nix_smq_xoff(dev, tm_node->parent, true);
- if (rc) {
- otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- goto cleanup;
- }
-cleanup:
- /* Restore cgx state */
- if (!dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-int otx2_nix_sq_flush_post(void *_txq)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq = _txq;
- struct otx2_eth_txq *s_txq;
- struct otx2_eth_dev *dev;
- bool once = false;
- uint16_t sq, s_sq;
- bool user;
- int rc;
-
- dev = txq->dev;
- sq = txq->sq;
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node) {
- otx2_err("Invalid node for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable all the siblings back */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
-
- if (sibling->id == sq)
- continue;
-
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- s_sq = sibling->id;
- s_txq = dev->eth_dev->data->tx_queues[s_sq];
- if (!s_txq)
- continue;
-
- if (!once) {
- /* Enable back if any SQ is still present */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
- once = true;
- }
-
- rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
- }
-
- return 0;
-}
-
-static int
-nix_sq_sched_data(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool rr_quantum_only)
-{
- struct rte_eth_dev *eth_dev = dev->eth_dev;
- struct otx2_mbox *mbox = dev->mbox;
- uint16_t sq = tm_node->id, smq;
- struct nix_aq_enq_req *req;
- uint64_t rr_quantum;
- int rc;
-
- smq = tm_node->parent->hw_id;
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- if (rr_quantum_only)
- otx2_tm_dbg("Update sq(%u) rr_quantum 0x%"PRIx64, sq, rr_quantum);
- else
- otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%"PRIx64,
- sq, smq, rr_quantum);
-
- if (sq > eth_dev->data->nb_tx_queues)
- return -EFAULT;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- req->qidx = sq;
- req->ctype = NIX_AQ_CTYPE_SQ;
- req->op = NIX_AQ_INSTOP_WRITE;
-
- /* smq update only when needed */
- if (!rr_quantum_only) {
- req->sq.smq = smq;
- req->sq_mask.smq = ~req->sq_mask.smq;
- }
- req->sq.smq_rr_quantum = rr_quantum;
- req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set smq, rc=%d", rc);
- return rc;
-}
-
-int otx2_nix_sq_enable(void *_txq)
-{
- struct otx2_eth_txq *txq = _txq;
- int rc;
-
- /* Enable sqb_aura fc */
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
- uint32_t flags, bool hw_only)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *next_node;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_free_req *req;
- uint32_t profile_id;
- int rc = 0;
-
- next_node = TAILQ_FIRST(&dev->node_list);
- while (next_node) {
- tm_node = next_node;
- next_node = TAILQ_NEXT(tm_node, node);
-
- /* Check for only requested nodes */
- if ((tm_node->flags & flags_mask) != flags)
- continue;
-
- if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
- tm_node->flags & NIX_TM_NODE_HWRES) {
- /* Free specific HW resource */
- otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
- nix_hwlvl2str(tm_node->hw_lvl),
- tm_node->hw_id, tm_node->lvl,
- tm_node->id, tm_node);
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = 0;
- req->schq_lvl = tm_node->hw_lvl;
- req->schq = tm_node->hw_id;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- tm_node->flags &= ~NIX_TM_NODE_HWRES;
- }
-
- /* Leave software elements if needed */
- if (hw_only)
- continue;
-
- otx2_tm_dbg("Free node lvl %u id %u (%p)",
- tm_node->lvl, tm_node->id, tm_node);
-
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile)
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- }
-
- if (!flags_mask) {
- /* Free all hw resources */
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = TXSCHQ_FREE_ALL;
-
- return otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static uint8_t
-nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_rsp *rsp)
-{
- uint16_t schq;
- uint8_t lvl;
-
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
- for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
- dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
- dev->txschq_contig_list[lvl][schq] =
- rsp->schq_contig_list[lvl][schq];
- }
-
- dev->txschq[lvl] = rsp->schq[lvl];
- dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
- }
- return 0;
-}
-
-static int
-nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *child,
- struct otx2_nix_tm_node *parent)
-{
- uint32_t hw_id, schq_con_index, prio_offset;
- uint32_t l_id, schq_index;
-
- otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
- nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
-
- child->flags |= NIX_TM_NODE_HWRES;
-
- /* Process root nodes */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- int idx = 0;
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_contig_index[l_id];
- hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_contig_index[l_id]++;
- /* Update TL1 hw_id for its parent for config purpose */
- idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
- hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
- child->parent_hw_id = hw_id;
- return 0;
- }
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_index[l_id];
- hw_id = dev->txschq_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_index[l_id]++;
- return 0;
- }
-
- /* Process children with parents */
- l_id = child->hw_lvl;
- schq_index = dev->txschq_index[l_id];
- schq_con_index = dev->txschq_contig_index[l_id];
-
- if (child->priority == parent->rr_prio) {
- hw_id = dev->txschq_list[l_id][schq_index];
- child->hw_id = hw_id;
- child->parent_hw_id = parent->hw_id;
- dev->txschq_index[l_id]++;
- } else {
- prio_offset = schq_con_index + child->priority;
- hw_id = dev->txschq_contig_list[l_id][prio_offset];
- child->hw_id = hw_id;
- }
- return 0;
-}
-
-static int
-nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *parent, *child;
- uint32_t child_hw_lvl, con_index_inc, i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
- TAILQ_FOREACH(parent, &dev->node_list, node) {
- child_hw_lvl = parent->hw_lvl - 1;
- if (parent->hw_lvl != i)
- continue;
- TAILQ_FOREACH(child, &dev->node_list, node) {
- if (!child->parent)
- continue;
- if (child->parent->id != parent->id)
- continue;
- nix_tm_assign_id_to_node(dev, child, parent);
- }
-
- con_index_inc = parent->max_prio + 1;
- dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
-
- /*
- * Explicitly assign id to parent node if it
- * doesn't have a parent
- */
- if (parent->hw_lvl == dev->otx2_tm_root_lvl)
- nix_tm_assign_id_to_node(dev, parent, NULL);
- }
- }
- return 0;
-}
-
-static uint8_t
-nix_tm_count_req_schq(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req, uint8_t lvl)
-{
- struct otx2_nix_tm_node *tm_node;
- uint8_t contig_count;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (lvl == tm_node->hw_lvl) {
- req->schq[lvl - 1] += tm_node->rr_num;
- if (tm_node->max_prio != UINT32_MAX) {
- contig_count = tm_node->max_prio + 1;
- req->schq_contig[lvl - 1] += contig_count;
- }
- }
- if (lvl == dev->otx2_tm_root_lvl &&
- dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
- tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
- req->schq_contig[dev->otx2_tm_root_lvl]++;
- }
- }
-
- req->schq[NIX_TXSCH_LVL_TL1] = 1;
- req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req)
-{
- uint8_t i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
- nix_tm_count_req_schq(dev, req, i);
-
- for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
- dev->txschq_index[i] = 0;
- dev->txschq_contig_index[i] = 0;
- }
- return 0;
-}
-
-static int
-nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_alloc_req *req;
- struct nix_txsch_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
-
- rc = nix_tm_prepare_txschq_req(dev, req);
- if (rc)
- return rc;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- nix_tm_copy_rsp_to_dev(dev, rsp);
- dev->link_cfg_lvl = rsp->link_cfg_lvl;
-
- nix_tm_assign_hw_id(dev);
- return 0;
-}
-
-static int
-nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint16_t sq;
- int rc;
-
- nix_tm_update_parent_info(dev);
-
- rc = nix_tm_send_txsch_alloc_msg(dev);
- if (rc) {
- otx2_err("TM failed to alloc tm resources=%d", rc);
- return rc;
- }
-
- rc = nix_tm_txsch_reg_config(dev);
- if (rc) {
- otx2_err("TM failed to configure sched registers=%d", rc);
- return rc;
- }
-
- /* Trigger MTU recalculate as SMQ needs MTU conf */
- if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc) {
- otx2_err("TM MTU update failed, rc=%d", rc);
- return rc;
- }
- }
-
- /* Mark all non-leaf's as enabled */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- if (!xmit_enable)
- return 0;
-
- /* Update SQ Sched Data while SQ is idle */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- rc = nix_sq_sched_data(dev, tm_node, false);
- if (rc) {
- otx2_err("SQ %u sched update failed, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- }
-
- /* Finally XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- return rc;
- }
- }
-
- /* Enable xmit as all the topology is ready */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- sq = tm_node->id;
- txq = eth_dev->data->tx_queues[sq];
-
- rc = otx2_nix_sq_enable(txq);
- if (rc) {
- otx2_err("TM sw xon failed on SQ %u, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox,
- struct nix_txschq_config *req,
- struct rte_tm_error *error)
-{
- int rc;
-
- if (!req->num_regs ||
- req->num_regs > MAX_REGS_PER_MBOX_MSG) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "invalid config";
- return -EIO;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- }
- return rc;
-}
-
-static uint16_t
-nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
-{
- if (nix_tm_have_tl1_access(dev)) {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL1;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH4:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- } else {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- }
-}
-
-static uint16_t
-nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
-{
- if (hw_lvl >= NIX_TXSCH_LVL_CNT)
- return 0;
-
- /* MDQ doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- return 0;
-
- /* PF's TL1 with VF's enabled doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- (dev->tm_flags & NIX_TM_TL1_NO_SP)))
- return 0;
-
- return TXSCH_TLX_SP_PRIO_MAX - 1;
-}
-
-
-static int
-validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
- uint32_t parent_id, uint32_t priority,
- struct rte_tm_error *error)
-{
- uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
- int i;
-
- /* Validate priority against max */
- if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "unsupported priority value";
- return -EINVAL;
- }
-
- if (parent_id == RTE_TM_NODE_ID_NULL)
- return 0;
-
- memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
- priorities[priority] = 1;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->flags & NIX_TM_NODE_USER))
- continue;
-
- if (tm_node->parent->id != parent_id)
- continue;
-
- priorities[tm_node->priority]++;
- }
-
- for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
- if (priorities[i] > 1)
- rr_num++;
-
- /* At max, one rr groups per parent */
- if (rr_num > 1) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "multiple DWRR node priority";
- return -EINVAL;
- }
-
- /* Check for previous priority to avoid holes in priorities */
- if (priority && !priorities[priority - 1]) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority not in order";
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int
-read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
- uint64_t *regval, uint32_t hw_lvl)
-{
- volatile struct nix_txschq_config *req;
- struct nix_txschq_config *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->read = 1;
- req->lvl = hw_lvl;
- req->reg[0] = reg;
- req->num_regs = 1;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc)
- return rc;
- *regval = rsp->regval[0];
- return 0;
-}
-
-/* Search for min rate in topology */
-static void
-nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t rate_min = 1E9; /* 1 Gbps */
-
- TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
- if (profile->params.peak.rate &&
- profile->params.peak.rate < rate_min)
- rate_min = profile->params.peak.rate;
-
- if (profile->params.committed.rate &&
- profile->params.committed.rate < rate_min)
- rate_min = profile->params.committed.rate;
- }
-
- dev->tm_rate_min = rate_min;
-}
-
-static int
-nix_xmit_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint64_t wdata, val;
- int i, rc;
-
- otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
-
- /* Enable CGX RXTX to drain pkts */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
- }
-
- /* XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Flush all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
- otx2_err("Failed to drain sq, rc=%d\n", rc);
- goto cleanup;
- }
- }
-
- /* XOFF & Flush all SMQ's. HRM mandates
- * all SQ's empty before SMQ flush is issued.
- */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, true);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Verify sanity of all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- wdata = ((uint64_t)txq->sq << 32);
- val = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- if (sqb_cnt > 1 || head_off != tail_off ||
- (*txq->fc_mem != txq->nb_sqb_bufs))
- otx2_err("Failed to gracefully flush sq %u", txq->sq);
- }
-
-cleanup:
- /* restore cgx state */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-static int
-otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- int *is_leaf, struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
-
- if (is_leaf == NULL) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- return -EINVAL;
- }
- if (nix_tm_is_leaf(dev, tm_node->lvl))
- *is_leaf = true;
- else
- *is_leaf = false;
- return 0;
-}
-
-static int
-otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
- struct rte_tm_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int rc, max_nr_nodes = 0, i;
- struct free_rsrcs_rsp *rsp;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
- max_nr_nodes += rsp->schq[i];
-
- cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
- /* TL1 level is reserved for PF */
- cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
- OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
- cap->non_leaf_nodes_identical = 1;
- cap->leaf_nodes_identical = 1;
-
- /* Shaper Capabilities */
- cap->shaper_private_n_max = max_nr_nodes;
- cap->shaper_n_max = max_nr_nodes;
- cap->shaper_private_dual_rate_n_max = max_nr_nodes;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
- cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
- cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
-
- /* Schedule Capabilities */
- cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
- cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
- cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
- cap->sched_wfq_n_groups_max = 1;
- cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->sched_wfq_packet_mode_supported = 1;
- cap->sched_wfq_byte_mode_supported = 1;
-
- cap->dynamic_update_mask =
- RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
- RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
- cap->stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES |
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- for (i = 0; i < RTE_COLORS; i++) {
- cap->mark_vlan_dei_supported[i] = false;
- cap->mark_ip_ecn_tcp_supported[i] = false;
- cap->mark_ip_dscp_supported[i] = false;
- }
-
- return 0;
-}
-
-static int
-otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
- struct rte_tm_level_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct free_rsrcs_rsp *rsp;
- uint16_t hw_lvl;
- int rc;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
-
- if (nix_tm_is_leaf(dev, lvl)) {
- /* Leaf */
- cap->n_nodes_max = dev->tm_leaf_cnt;
- cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
- cap->leaf_nodes_identical = 1;
- cap->leaf.stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
-
- } else if (lvl == OTX2_TM_LVL_ROOT) {
- /* Root node, aka TL2(vf)/TL1(pf) */
- cap->n_nodes_max = 1;
- cap->n_nodes_nonleaf_max = 1;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported =
- nix_tm_have_tl1_access(dev) ? false : true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (nix_tm_have_tl1_access(dev))
- cap->nonleaf.stats_mask =
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- } else if ((lvl < OTX2_TM_LVL_MAX) &&
- (hw_lvl < NIX_TXSCH_LVL_CNT)) {
- /* TL2, TL3, TL4, MDQ */
- cap->n_nodes_max = rsp->schq[hw_lvl];
- cap->n_nodes_nonleaf_max = cap->n_nodes_max;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported = true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- /* MDQ doesn't support Strict Priority */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max =
- rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
- } else {
- /* unsupported level */
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- return rc;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct free_rsrcs_rsp *rsp;
- int rc, hw_lvl, lvl;
-
- memset(cap, 0, sizeof(*cap));
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- hw_lvl = tm_node->hw_lvl;
- lvl = tm_node->lvl;
-
- /* Leaf node */
- if (nix_tm_is_leaf(dev, lvl)) {
- cap->stats_mask = RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
- return 0;
- }
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- /* Non Leaf Shaper */
- cap->shaper_private_supported = true;
- cap->shaper_private_dual_rate_supported =
- (hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
-
- /* Non Leaf Scheduler */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
-
- cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_children_per_group_max =
- cap->nonleaf.sched_n_children_max;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (hw_lvl == NIX_TXSCH_LVL_TL1)
- cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_shaper_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile;
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile ID exist";
- return -EINVAL;
- }
-
- /* Committed rate and burst size can be enabled/disabled */
- if (params->committed.size || params->committed.rate) {
- if (params->committed.size < MIN_SHAPER_BURST ||
- params->committed.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->committed.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper committed rate invalid";
- return -EINVAL;
- }
- }
-
- /* Peak rate and burst size can be enabled/disabled */
- if (params->peak.size || params->peak.rate) {
- if (params->peak.size < MIN_SHAPER_BURST ||
- params->peak.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->peak.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper peak rate invalid";
- return -EINVAL;
- }
- }
-
- if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
- params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
- error->message = "length adjust invalid";
- return -EINVAL;
- }
-
- profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
- sizeof(struct otx2_nix_tm_shaper_profile), 0);
- if (!profile)
- return -ENOMEM;
-
- profile->shaper_profile_id = profile_id;
- rte_memcpy(&profile->params, params,
- sizeof(struct rte_tm_shaper_params));
- TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
-
- otx2_tm_dbg("Added TM shaper profile %u, "
- " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
- ", cbs %" PRIu64 " , adj %u, pkt mode %d",
- profile_id,
- params->peak.rate * 8,
- params->peak.size,
- params->committed.rate * 8,
- params->committed.size,
- params->pkt_length_adjust,
- params->packet_mode);
-
- /* Translate rate as bits per second */
- profile->params.peak.rate = profile->params.peak.rate * 8;
- profile->params.committed.rate = profile->params.committed.rate * 8;
- /* Always use PIR for single rate shaping */
- if (!params->peak.rate && params->committed.rate) {
- profile->params.peak = profile->params.committed;
- memset(&profile->params.committed, 0,
- sizeof(profile->params.committed));
- }
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile ID not exist";
- return -EINVAL;
- }
-
- if (profile->reference_count) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper profile in use";
- return -EINVAL;
- }
-
- otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
- rte_free(profile);
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint32_t lvl,
- struct rte_tm_node_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_nix_tm_node *parent_node;
- int rc, pkt_mode, clear_on_fail = 0;
- uint32_t exp_next_lvl, i;
- uint32_t profile_id;
- uint16_t hw_lvl;
-
- /* we don't support dynamic updates */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "dynamic update not supported";
- return -EIO;
- }
-
-	/* Leaf nodes have to be the same priority */
- if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "queue shapers must be priority 0";
- return -EIO;
- }
-
- parent_node = nix_tm_node_search(dev, parent_node_id, true);
-
- /* find the right level */
- if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
- if (parent_node_id == RTE_TM_NODE_ID_NULL) {
- lvl = OTX2_TM_LVL_ROOT;
- } else if (parent_node) {
- lvl = parent_node->lvl + 1;
- } else {
-			/* Neither proper parent nor proper level id given */
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -ERANGE;
- }
- }
-
- /* Translate rte_tm level id's to nix hw level id's */
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
- if (hw_lvl == NIX_TXSCH_LVL_CNT &&
- !nix_tm_is_leaf(dev, lvl)) {
- error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
- error->message = "invalid level id";
- return -ERANGE;
- }
-
- if (node_id < dev->tm_leaf_cnt)
- exp_next_lvl = NIX_TXSCH_LVL_SMQ;
- else
- exp_next_lvl = hw_lvl + 1;
-
- /* Check if there is no parent node yet */
- if (hw_lvl != dev->otx2_tm_root_lvl &&
- (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -EINVAL;
- }
-
- /* Check if a node already exists */
- if (nix_tm_node_search(dev, node_id, true)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "node already exists";
- return -EINVAL;
- }
-
- if (!nix_tm_is_leaf(dev, lvl)) {
- /* Check if shaper profile exists for non leaf node */
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "invalid shaper profile";
- return -EINVAL;
- }
-
- /* Minimum static priority count is 1 */
- if (!params->nonleaf.n_sp_priorities ||
- params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
- error->message = "invalid sp priorities";
- return -EINVAL;
- }
-
- pkt_mode = 0;
- /* Validate weight mode */
- for (i = 0; i < params->nonleaf.n_sp_priorities &&
- params->nonleaf.wfq_weight_mode; i++) {
- pkt_mode = !params->nonleaf.wfq_weight_mode[i];
- if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
- continue;
-
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
- error->message = "unsupported weight mode";
- return -EINVAL;
- }
-
- if (profile && params->nonleaf.n_sp_priorities &&
- pkt_mode != profile->params.packet_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper wfq packet mode mismatch";
- return -EINVAL;
- }
- }
-
- /* Check if there is second DWRR already in siblings or holes in prio */
- if (validate_prio(dev, lvl, parent_node_id, priority, error))
- return -EINVAL;
-
- if (weight > MAX_SCHED_WEIGHT) {
- error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "max weight exceeded";
- return -EINVAL;
- }
-
- rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
- priority, weight, hw_lvl,
- lvl, true, params);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- /* cleanup user added nodes */
- if (clear_on_fail)
- nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, false);
- error->message = "failed to add node";
- return rc;
- }
- error->type = RTE_TM_ERROR_TYPE_NONE;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *child_node;
- struct otx2_nix_tm_shaper_profile *profile;
- uint32_t profile_id;
-
- /* we don't support dynamic updates yet */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "hierarchy exists";
- return -EIO;
- }
-
- if (node_id == RTE_TM_NODE_ID_NULL) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node id";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Check for any existing children */
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (child_node->parent == tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "children exist";
- return -EINVAL;
- }
- }
-
- /* Remove shaper profile reference */
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- return 0;
-}
-
-static int
-nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error, bool suspend)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint16_t flags;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- flags = tm_node->flags;
- flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
- (flags | NIX_TM_NODE_ENABLED);
-
- if (tm_node->flags == flags)
- return 0;
-
- /* send mbox for state change */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node, suspend,
- req->reg, req->regval);
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags = flags;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
-}
-
-static int
-otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
-}
-
-static int
-otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
- int clear_on_fail,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint32_t leaf_cnt = 0;
- int rc;
-
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy exists";
- return -EINVAL;
- }
-
- /* Check if we have all the leaf nodes */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->flags & NIX_TM_NODE_USER &&
- tm_node->id < dev->tm_leaf_cnt)
- leaf_cnt++;
- }
-
- if (leaf_cnt != dev->tm_leaf_cnt) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "incomplete hierarchy";
- return -EINVAL;
- }
-
- /*
- * Disable xmit will be enabled when
- * new topology is available.
- */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- /* Delete default/ratelimit tree */
- if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free default resources";
- return rc;
- }
- dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
- NIX_TM_RATE_LIMIT_TREE);
- }
-
- /* Free up user alloc'ed resources */
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free user resources";
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "alloc resources failed";
- /* TODO should we restore default config ? */
- if (clear_on_fail)
- nix_tm_free_resources(dev, 0, 0, false);
- return rc;
- }
-
- error->type = RTE_TM_ERROR_TYPE_NONE;
- dev->tm_flags |= NIX_TM_COMMITTED;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node";
- return -EINVAL;
- }
-
- if (profile_id == tm_node->params.shaper_profile_id)
- return 0;
-
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile ID not exist";
- return -EINVAL;
- }
- }
-
- if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile pkt mode mismatch";
- return -EINVAL;
- }
-
- tm_node->params.shaper_profile_id = profile_id;
-
- /* Nothing to do if not yet committed */
- if (!(dev->tm_flags & NIX_TM_COMMITTED))
- return 0;
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Flush the specific node with SW_XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
- k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
- req->num_regs = k;
-
- rc = send_tm_reqval(mbox, req, error);
- if (rc)
- return rc;
-
- shaper_default_red_algo(dev, tm_node, profile);
-
- /* Update the PIR/CIR and clear SW XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
-
- k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
-
- req->num_regs = k;
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id, uint32_t new_parent_id,
- uint32_t priority, uint32_t weight,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_nix_tm_node *new_parent;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Parent id valid only for non root nodes */
- if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
- new_parent = nix_tm_node_search(dev, new_parent_id, true);
- if (!new_parent) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "no such parent node";
- return -EINVAL;
- }
-
- /* Current support is only for dynamic weight update */
- if (tm_node->parent != new_parent ||
- tm_node->priority != priority) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "only weight update supported";
- return -EINVAL;
- }
- }
-
- /* Skip if no change */
- if (tm_node->weight == weight)
- return 0;
-
- tm_node->weight = weight;
-
- /* For leaf nodes, SQ CTX needs update */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- /* Update SQ quantum data on the fly */
- rc = nix_sq_sched_data(dev, tm_node, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "sq sched data update failed";
- return rc;
- }
- } else {
- /* XOFF Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XOFF this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* Update new weight for current node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sched_reg(dev, tm_node,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_stats *stats,
- uint64_t *stats_mask, int clear,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint64_t reg, val;
- int64_t *addr;
- int rc = 0;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "HW resources not allocated";
- return -EINVAL;
- }
-
- /* Stats support only for leaf node or TL1 root */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- reg = (((uint64_t)tm_node->id) << 32);
-
- /* Packets */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_pkts = val - tm_node->last_pkts;
-
- /* Bytes */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_bytes = val - tm_node->last_bytes;
-
- if (clear) {
- tm_node->last_pkts = stats->n_pkts;
- tm_node->last_bytes = stats->n_bytes;
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
-
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "stats read error";
-
- /* RED Drop packets */
- reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
- val - tm_node->last_pkts;
-
- /* RED Drop bytes */
- reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
- val - tm_node->last_bytes;
-
- /* Clear stats */
- if (clear) {
- tm_node->last_pkts =
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
- tm_node->last_bytes =
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- } else {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "unsupported node";
- rc = -EINVAL;
- }
-
-exit:
- return rc;
-}
-
-const struct rte_tm_ops otx2_tm_ops = {
- .node_type_get = otx2_nix_tm_node_type_get,
-
- .capabilities_get = otx2_nix_tm_capa_get,
- .level_capabilities_get = otx2_nix_tm_level_capa_get,
- .node_capabilities_get = otx2_nix_tm_node_capa_get,
-
- .shaper_profile_add = otx2_nix_tm_shaper_profile_add,
- .shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
-
- .node_add = otx2_nix_tm_node_add,
- .node_delete = otx2_nix_tm_node_delete,
- .node_suspend = otx2_nix_tm_node_suspend,
- .node_resume = otx2_nix_tm_node_resume,
- .hierarchy_commit = otx2_nix_tm_hierarchy_commit,
-
- .node_shaper_update = otx2_nix_tm_node_shaper_update,
- .node_parent_update = otx2_nix_tm_node_parent_update,
- .node_stats_read = otx2_nix_tm_node_stats_read,
-};
-
-static int
-nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i;
- int rc = 0, leaf_level;
-
- /* Default params */
-	memset(&params, 0, sizeof(params));
- params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
-					     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
-					     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
-					     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
-					     OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
-					     OTX2_TM_LVL_SCH4, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 4;
- leaf_level = OTX2_TM_LVL_QUEUE;
- } else {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
-					     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
-					     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
-					     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
-					     OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 3;
- leaf_level = OTX2_TM_LVL_SCH4;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
-					     leaf_level, false, &params);
- if (rc)
- break;
- }
-
-exit:
- return rc;
-}
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- TAILQ_INIT(&dev->node_list);
- TAILQ_INIT(&dev->shaper_profile_list);
- dev->tm_rate_min = 1E9; /* 1Gbps */
-}
-
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- int rc;
-
- /* Free up all resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to freeup existing resources,rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
- dev->tm_flags = NIX_TM_DEFAULT_TREE;
-
- /* Disable TL1 Static Priority when VF's are enabled
- * as otherwise VF's TL2 reallocation will be needed
- * runtime to support a specific topology of PF.
- */
- if (pci_dev->max_vfs)
- dev->tm_flags |= NIX_TM_TL1_NO_SP;
-
- rc = nix_tm_prepare_default_tree(eth_dev);
- if (rc != 0)
- return rc;
-
- rc = nix_tm_alloc_resources(eth_dev, false);
- if (rc != 0)
- return rc;
- dev->tm_leaf_cnt = sq_cnt;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i, rc = 0;
-
-	memset(&params, 0, sizeof(params));
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
-					     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
-					     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
-					     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
-					     OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 3;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4,
-						     false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i,
- leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_QUEUE,
-						     false, &params);
- if (rc)
- goto error;
- }
-
- return 0;
- }
-
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
-				     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
-				     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
-				     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 2;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3,
-					     false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_SCH4,
-					     false, &params);
- if (rc)
- break;
- }
-error:
- return rc;
-}
-
-static int
-otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
- struct otx2_nix_tm_node *tm_node,
- uint64_t tx_rate)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile profile;
- struct otx2_mbox *mbox = dev->mbox;
- volatile uint64_t *reg, *regval;
- struct nix_txschq_config *req;
- uint16_t flags;
- uint8_t k = 0;
- int rc;
-
- flags = tm_node->flags;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_MDQ;
- reg = req->reg;
- regval = req->regval;
-
- if (tx_rate == 0) {
-		k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
- flags &= ~NIX_TM_NODE_ENABLED;
- goto exit;
- }
-
- if (!(flags & NIX_TM_NODE_ENABLED)) {
-		k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
- flags |= NIX_TM_NODE_ENABLED;
- }
-
- /* Use only PIR for rate limit */
- memset(&profile, 0, sizeof(profile));
- profile.params.peak.rate = tx_rate;
- /* Minimum burst of ~4us Bytes of Tx */
- profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
- (4ull * tx_rate) / (1E6 * 8));
- if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
- dev->tm_rate_min = tx_rate;
-
-	k += prepare_tm_shaper_reg(tm_node, &profile, &reg[k], &regval[k]);
-exit:
- req->num_regs = k;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- tm_node->flags = flags;
- return 0;
-}
-
-int
-otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate_mbps)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- if (queue_idx >= eth_dev->data->nb_tx_queues)
- return -EINVAL;
-
- if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
- goto error;
-
- if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- eth_dev->data->nb_tx_queues > 1) {
- /* For TM topology change ethdev needs to be stopped */
- if (eth_dev->data->dev_started)
- return -EBUSY;
-
- /*
- * Disable xmit will be enabled when
- * new topology is available.
- */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc < 0) {
- otx2_tm_dbg("failed to free default resources, rc %d",
- rc);
- return -EIO;
- }
-
- rc = nix_tm_prepare_rate_limited_tree(eth_dev);
- if (rc < 0) {
- otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc != 0) {
- otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
- return rc;
- }
-
- dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
- dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
- }
-
- tm_node = nix_tm_node_search(dev, queue_idx, false);
-
- /* check if we found a valid leaf node */
- if (!tm_node ||
- !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent ||
- tm_node->parent->hw_id == UINT32_MAX)
- return -EIO;
-
- return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
-error:
- otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
- return -EINVAL;
-}
-
-int
-otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (!arg)
- return -EINVAL;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- *(const void **)arg = &otx2_tm_ops;
-
- return 0;
-}
-
-int
-otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- /* Xmit is assumed to be disabled */
- /* Free up resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to freeup existing resources,rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
-
- dev->tm_flags = 0;
- return 0;
-}
-
-int
-otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq)
-{
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* 0..sq_cnt-1 are leaf nodes */
- if (sq >= dev->tm_leaf_cnt)
- return -EINVAL;
-
- /* Search for internal node first */
- tm_node = nix_tm_node_search(dev, sq, false);
- if (!tm_node)
- tm_node = nix_tm_node_search(dev, sq, true);
-
- /* Check if we found a valid leaf node */
- if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
- return -EIO;
- }
-
- /* Get SMQ Id of leaf node's parent */
- *smq = tm_node->parent->hw_id;
- *rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc)
- return rc;
- tm_node->flags |= NIX_TM_NODE_ENABLED;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
deleted file mode 100644
index db44d4891f..0000000000
--- a/drivers/net/octeontx2/otx2_tm.h
+++ /dev/null
@@ -1,176 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TM_H__
-#define __OTX2_TM_H__
-
-#include <stdbool.h>
-
-#include <rte_tm_driver.h>
-
-#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
-#define NIX_TM_COMMITTED BIT_ULL(1)
-#define NIX_TM_RATE_LIMIT_TREE BIT_ULL(2)
-#define NIX_TM_TL1_NO_SP BIT_ULL(3)
-
-struct otx2_eth_dev;
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
-int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate);
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
-int otx2_nix_sq_flush_post(void *_txq);
-int otx2_nix_sq_enable(void *_txq);
-int otx2_nix_get_link(struct otx2_eth_dev *dev);
-int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
-
-struct otx2_nix_tm_node {
- TAILQ_ENTRY(otx2_nix_tm_node) node;
- uint32_t id;
- uint32_t hw_id;
- uint32_t priority;
- uint32_t weight;
- uint16_t lvl;
- uint16_t hw_lvl;
- uint32_t rr_prio;
- uint32_t rr_num;
- uint32_t max_prio;
- uint32_t parent_hw_id;
- uint32_t flags:16;
-#define NIX_TM_NODE_HWRES BIT_ULL(0)
-#define NIX_TM_NODE_ENABLED BIT_ULL(1)
-#define NIX_TM_NODE_USER BIT_ULL(2)
-#define NIX_TM_NODE_RED_DISCARD BIT_ULL(3)
- /* Shaper algorithm for RED state @NIX_REDALG_E */
- uint32_t red_algo:2;
- uint32_t pkt_mode:1;
-
- struct otx2_nix_tm_node *parent;
- struct rte_tm_node_params params;
-
- /* Last stats */
- uint64_t last_pkts;
- uint64_t last_bytes;
-};
-
-struct otx2_nix_tm_shaper_profile {
- TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
- uint32_t shaper_profile_id;
- uint32_t reference_count;
- struct rte_tm_shaper_params params; /* Rate in bits/sec */
-};
-
-struct shaper_params {
- uint64_t burst_exponent;
- uint64_t burst_mantissa;
- uint64_t div_exp;
- uint64_t exponent;
- uint64_t mantissa;
- uint64_t burst;
- uint64_t rate;
-};
-
-TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
-TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
-
-#define MAX_SCHED_WEIGHT ((uint8_t)~0)
-#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
-#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight) \
- ((((__weight) & MAX_SCHED_WEIGHT) * \
- NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
-
-/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
-/* = NIX_MAX_HW_MTU */
-#define DEFAULT_RR_WEIGHT 71
-
-/** NIX rate limits */
-#define MAX_RATE_DIV_EXP 12
-#define MAX_RATE_EXPONENT 0xf
-#define MAX_RATE_MANTISSA 0xff
-
-#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
-
-/* NIX rate calculation in Bits/Sec
- * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
- * << NIX_*_PIR[RATE_EXPONENT]) / 256
- * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *
- * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
- * << NIX_*_CIR[RATE_EXPONENT]) / 256
- * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- */
-#define SHAPER_RATE(exponent, mantissa, div_exp) \
- ((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
- / (((1ull << (div_exp)) * 256)))
-
-/* 96xx rate limits in Bits/Sec */
-#define MIN_SHAPER_RATE \
- SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE \
- SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
-
-/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */
-#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
-#define NIX_LENGTH_ADJUST_MAX 255
-
-/** TM Shaper - low level operations */
-
-/** NIX burst limits */
-#define MAX_BURST_EXPONENT 0xf
-#define MAX_BURST_MANTISSA 0xff
-
-/* NIX burst calculation
- * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
- * << (NIX_*_PIR[BURST_EXPONENT] + 1))
- * / 256
- *
- * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
- * << (NIX_*_CIR[BURST_EXPONENT] + 1))
- * / 256
- */
-#define SHAPER_BURST(exponent, mantissa) \
- (((256 + (mantissa)) << ((exponent) + 1)) / 256)
-
-/** Shaper burst limits */
-#define MIN_SHAPER_BURST \
- SHAPER_BURST(0, 0)
-
-#define MAX_SHAPER_BURST \
- SHAPER_BURST(MAX_BURST_EXPONENT,\
- MAX_BURST_MANTISSA)
-
-/* Default TL1 priority and Quantum from AF */
-#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
-#define TXSCH_TL1_DFLT_RR_PRIO 1
-
-#define TXSCH_TLX_SP_PRIO_MAX 10
-
-static inline const char *
-nix_hwlvl2str(uint32_t hw_lvl)
-{
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- return "SMQ/MDQ";
- case NIX_TXSCH_LVL_TL4:
- return "TL4";
- case NIX_TXSCH_LVL_TL3:
- return "TL3";
- case NIX_TXSCH_LVL_TL2:
- return "TL2";
- case NIX_TXSCH_LVL_TL1:
- return "TL1";
- default:
- break;
- }
-
- return "???";
-}
-
-#endif /* __OTX2_TM_H__ */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
deleted file mode 100644
index e95184632f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.c
+++ /dev/null
@@ -1,1077 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-
-#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
- /* Cached value is low, Update the fc_cache_pkts */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
- /* Multiply with sqe_per_sqb to express in pkts */ \
- (txq)->fc_cache_pkts = \
- ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
- (txq)->sqes_per_sqb_log2; \
- /* Check it again for the room */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) \
- return 0; \
- } \
-} while (0)
-
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint16_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, 4, flags);
- otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint64_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
- uint16_t segdw;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, segdw,
- flags);
- otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-#define NIX_DESCS_PER_LOOP 4
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
- uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
- uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- uint64x2_t senddesc01_w0, senddesc23_w0;
- uint64x2_t senddesc01_w1, senddesc23_w1;
- uint64x2_t sgdesc01_w0, sgdesc23_w0;
- uint64x2_t sgdesc01_w1, sgdesc23_w1;
- struct otx2_eth_txq *txq = tx_queue;
- uint64_t *lmt_addr = txq->lmt_addr;
- rte_iova_t io_addr = txq->io_addr;
- uint64x2_t ltypes01, ltypes23;
- uint64x2_t xtmp128, ytmp128;
- uint64x2_t xmask01, xmask23;
- uint64x2_t cmd00, cmd01;
- uint64x2_t cmd10, cmd11;
- uint64x2_t cmd20, cmd21;
- uint64x2_t cmd30, cmd31;
- uint64_t lmt_status, i;
- uint16_t pkts_left;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
- senddesc23_w0 = senddesc01_w0;
- senddesc01_w1 = vdupq_n_u64(0);
- senddesc23_w1 = senddesc01_w1;
- sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
- sgdesc23_w0 = sgdesc01_w0;
-
- for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
- /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
- senddesc01_w0 = vbicq_u64(senddesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
- sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
-
- senddesc23_w0 = senddesc01_w0;
- sgdesc23_w0 = sgdesc01_w0;
-
- /* Move mbufs to iova */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, buf_iova));
- /*
- * Get mbuf's, olflags, iova, pktlen, dataoff
- * dataoff_iovaX.D[0] = iova,
- * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
- * len_olflagsX.D[0] = ol_flags,
- * len_olflagsX.D[1](63:32) = mbuf->pkt_len
- */
- dataoff_iova0 = vld1q_u64(mbuf0);
- len_olflags0 = vld1q_u64(mbuf0 + 2);
- dataoff_iova1 = vld1q_u64(mbuf1);
- len_olflags1 = vld1q_u64(mbuf1 + 2);
- dataoff_iova2 = vld1q_u64(mbuf2);
- len_olflags2 = vld1q_u64(mbuf2 + 2);
- dataoff_iova3 = vld1q_u64(mbuf3);
- len_olflags3 = vld1q_u64(mbuf3 + 2);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- struct rte_mbuf *mbuf;
- /* Set don't free bit if reference count > 1 */
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- } else {
- struct rte_mbuf *mbuf;
- /* Mark mempool object as "put" since
- * it is freed by NIX
- */
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
- RTE_SET_USED(mbuf);
- }
-
- /* Move mbufs to point pool */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (flags &
- (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- /* Get tx_offload for ol2, ol3, l2, l3 lengths */
- /*
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- */
-
- asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf0 + 2) : "memory");
-
- asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf1 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf2 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf3 + 2) : "memory");
-
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- } else {
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- }
-
- const uint8x16_t shuf_mask2 = {
- 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
-
- /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
- const uint64x2_t and_mask0 = {
- 0xFFFFFFFFFFFFFFFF,
- 0x000000000000FFFF,
- };
-
- dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
- dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
- dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
- dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
-
- /*
- * Pick only 16 bits of pktlen preset at bits 63:32
- * and place them at bits 15:0.
- */
- xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
- ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
-
- /* Add pairwise to get dataoff + iova in sgdesc_w1 */
- sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
- sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
-
- /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of
- * pktlen at 15:0 position.
- */
- sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
- sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
-
- if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * il3/il4 types. But we still use ol3/ol4 types in
- * senddesc_w1 as only one header processing is enabled.
- */
- const uint8x16_t tbl = {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6 assumed) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):L3_LEN(9):L2_LEN(7+z)
- * E(47):L3_LEN(9):L2_LEN(7+z)
- */
- senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
- senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
-
- /* Move OLFLAGS bits 55:52 to 51:48
- * with zeros preprended on the byte and rest
- * don't care
- */
- xtmp128 = vshrq_n_u8(xtmp128, 4);
- ytmp128 = vshrq_n_u8(ytmp128, 4);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 48:55 of iltype
- * and place it in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * ol3/ol4 types.
- */
-
- const uint8x16_t tbl = {
- /* [0-15] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
- 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer ol flags only */
- const uint64x2_t o_cksum_mask = {
- 0x1C00020000000000,
- 0x1C00020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift oltype by 2 to start nibble from BIT(56)
- * instead of BIT(58)
- */
- xtmp128 = vshrq_n_u8(xtmp128, 2);
- ytmp128 = vshrq_n_u8(ytmp128, 2);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 56:63 of oltype
- * and place it in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /* Lookup table to translate ol_flags to
- * ol4type, ol3type, il4type, il3type of senddesc_w1
- */
- const uint8x16x2_t tbl = {
- {
- {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x22, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x32, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x03, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_IP_CKSUM
- */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- },
-
- {
- /* [16-31] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IP_CKSUM
- */
- 0x32, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4
- */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM |
- * OUTER_IPV6
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- },
- }
- };
-
- /* Extract olflags to translate to oltype & iltype */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- */
- const uint32x4_t tshft_4 = {
- 1, 0,
- 1, 0,
- };
- senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
- senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
-
- /*
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer and inner header ol_flags */
- const uint64x2_t oi_cksum_mask = {
- 0x1CF0020000000000,
- 0x1CF0020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift right oltype by 2 and iltype by 4
- * to start oltype nibble from BIT(58)
- * instead of BIT(56) and iltype nibble from BIT(48)
- * instead of BIT(52).
- */
- const int8x16_t tshft5 = {
- 8, 8, 8, 8, 8, 8, -4, -2,
- 8, 8, 8, 8, 8, 8, -4, -2,
- };
-
- xtmp128 = vshlq_u8(xtmp128, tshft5);
- ytmp128 = vshlq_u8(ytmp128, tshft5);
- /*
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- */
- const int8x16_t tshft3 = {
- -1, 0, -1, 0, 0, 0, 0, 0,
- -1, 0, -1, 0, 0, 0, 0, 0,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Mark Bit(4) of oltype */
- const uint64x2_t oi_cksum_mask2 = {
- 0x1000000000000000,
- 0x1000000000000000,
- };
-
- xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
- ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
-
- /* Do the lookup */
- ltypes01 = vqtbl2q_u8(tbl, xtmp128);
- ltypes23 = vqtbl2q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 48:55 of iltype and
- * Bit 56:63 of oltype and place it in corresponding
- * place in senddesc_w1.
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
- * l3len, l2len, ol3len, ol2len.
- * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
- * a = a + (a << 16)
- * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
- * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 16));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 16));
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- } else {
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-
- /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
- }
-
- do {
- vst1q_u64(lmt_addr, cmd00);
- vst1q_u64(lmt_addr + 2, cmd01);
- vst1q_u64(lmt_addr + 4, cmd10);
- vst1q_u64(lmt_addr + 6, cmd11);
- vst1q_u64(lmt_addr + 8, cmd20);
- vst1q_u64(lmt_addr + 10, cmd21);
- vst1q_u64(lmt_addr + 12, cmd30);
- vst1q_u64(lmt_addr + 14, cmd31);
- lmt_status = otx2_lmt_submit(io_addr);
-
- } while (lmt_status == 0);
- tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
- }
-
- if (unlikely(pkts_left))
- pkts += nix_xmit_pkts(tx_queue, tx_pkts, pkts_left, cmd, flags);
-
- return pkts;
-}
-
-#else
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- RTE_SET_USED(tx_queue);
- RTE_SET_USED(tx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(cmd);
- RTE_SET_USED(flags);
- return 0;
-}
-#endif
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* For TSO inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- \
- /* For TSO inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* VLAN, TSTMP, TSO is not supported by vec */ \
- if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
- (flags) & NIX_TX_OFFLOAD_TSTAMP_F || \
- (flags) & NIX_TX_OFFLOAD_TSO_F) \
- return 0; \
- return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, cmd, (flags)); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-static inline void
-pick_tx_func(struct rte_eth_dev *eth_dev,
- const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
- eth_dev->tx_pkt_burst = tx_burst
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-}
-
-void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- if (dev->scalar_ena ||
- (dev->tx_offload_flags &
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F |
- NIX_TX_OFFLOAD_TSO_F)))
- pick_tx_func(eth_dev, nix_eth_tx_burst);
- else
- pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-
- if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
-
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
deleted file mode 100644
index 4bbd5a390f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.h
+++ /dev/null
@@ -1,791 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TX_H__
-#define __OTX2_TX_H__
-
-#define NIX_TX_OFFLOAD_NONE (0)
-#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
-#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
-#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
-#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
-#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
-#define NIX_TX_OFFLOAD_TSO_F BIT(5)
-#define NIX_TX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control xmit_prepare function.
- * Defining it from backwards to denote its been
- * not used as offload flags to pick function
- */
-#define NIX_TX_MULTI_SEG_F BIT(15)
-
-#define NIX_TX_NEED_SEND_HDR_W1 \
- (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
- NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_TX_NEED_EXT_HDR \
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F | \
- NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_UDP_TUN_BITMASK \
- ((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
- (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
-
-#define NIX_LSO_FORMAT_IDX_TSOV4 (0)
-#define NIX_LSO_FORMAT_IDX_TSOV6 (1)
-
-/* Function to determine no of tx subdesc required in case ext
- * sub desc is enabled.
- */
-static __rte_always_inline int
-otx2_nix_tx_ext_subs(const uint16_t flags)
-{
- return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
- ((flags & (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
- 1 : 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
- const uint64_t ol_flags, const uint16_t no_segdw,
- const uint16_t flags)
-{
- if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- struct nix_send_mem_s *send_mem;
- uint16_t off = (no_segdw - 1) << 1;
- const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
-
- send_mem = (struct nix_send_mem_s *)(cmd + off);
- if (flags & NIX_TX_MULTI_SEG_F) {
- /* Retrieving the default desc values */
- cmd[off] = send_mem_desc[6];
-
- /* Using compiler barier to avoid voilation of C
- * aliasing rules.
- */
- rte_compiler_barrier();
- }
-
- /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
- * should not be recorded, hence changing the alg type to
- * NIX_SENDMEMALG_SET and also changing send mem addr field to
- * next 8 bytes as it corrpt the actual tx tstamp registered
- * address.
- */
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp);
-
- send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
- (is_ol_tstamp));
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_pktmbuf_detach(struct rte_mbuf *m)
-{
- struct rte_mempool *mp = m->pool;
- uint32_t mbuf_size, buf_len;
- struct rte_mbuf *md;
- uint16_t priv_size;
- uint16_t refcount;
-
- /* Update refcount of direct mbuf */
- md = rte_mbuf_from_indirect(m);
- refcount = rte_mbuf_refcnt_update(md, -1);
-
- priv_size = rte_pktmbuf_priv_size(mp);
- mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
- buf_len = rte_pktmbuf_data_room_size(mp);
-
- m->priv_size = priv_size;
- m->buf_addr = (char *)m + mbuf_size;
- m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
- m->buf_len = (uint16_t)buf_len;
- rte_pktmbuf_reset_headroom(m);
- m->data_len = 0;
- m->ol_flags = 0;
- m->next = NULL;
- m->nb_segs = 1;
-
- /* Now indirect mbuf is safe to free */
- rte_pktmbuf_free(m);
-
- if (refcount == 0) {
- rte_mbuf_refcnt_set(md, 1);
- md->data_len = 0;
- md->ol_flags = 0;
- md->next = NULL;
- md->nb_segs = 1;
- return 0;
- } else {
- return 1;
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_prefree_seg(struct rte_mbuf *m)
-{
- if (likely(rte_mbuf_refcnt_read(m) == 1)) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- rte_mbuf_refcnt_set(m, 1);
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- }
-
- /* Mbuf is having refcount more than 1 so need not to be freed */
- return 1;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
-{
- uint64_t mask, ol_flags = m->ol_flags;
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
- uint16_t *iplen, *oiplen, *oudplen;
- uint16_t lso_sb, paylen;
-
- mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
- lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
- m->l2_len + m->l3_len + m->l4_len;
-
- /* Reduce payload len from base headers */
- paylen = m->pkt_len - lso_sb;
-
- /* Get iplen position assuming no tunnel hdr */
- iplen = (uint16_t *)(mdata + m->l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
-
- oiplen = (uint16_t *)(mdata + m->outer_l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
- *oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
- paylen);
-
- /* Update format for UDP tunneled packet */
- if (is_udp_tun) {
- oudplen = (uint16_t *)(mdata + m->outer_l2_len +
- m->outer_l3_len + 4);
- *oudplen =
- rte_cpu_to_be_16(rte_be_to_cpu_16(*oudplen) -
- paylen);
- }
-
- /* Update iplen position to inner ip hdr */
- iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
- m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- }
-
- *iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
- }
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- uint64_t ol_flags = 0, mask;
- union nix_send_hdr_w1_u w1;
- union nix_send_sg_s *sg;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- if (flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
- sg = (union nix_send_sg_s *)(cmd + 4);
- /* Clear previous markings */
- send_hdr_ext->w0.lso = 0;
- send_hdr_ext->w1.u = 0;
- } else {
- sg = (union nix_send_sg_s *)(cmd + 2);
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- ol_flags = m->ol_flags;
- w1.u = 0;
- }
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- send_hdr->w0.total = m->data_len;
- send_hdr->w0.aura =
- npa_lf_aura_handle_to_aura(m->pool->pool_id);
- }
-
- /*
- * L3type: 2 => IPV4
- * 3 => IPV4 with csum
- * 4 => IPV6
- * L3type and L3ptr needs to be set for either
- * L3 csum or L4 csum or LSO
- *
- */
-
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t ol3type =
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L3 */
- w1.ol3type = ol3type;
- mask = 0xffffull << ((!!ol3type) << 4);
- w1.ol3ptr = ~mask & m->outer_l2_len;
- w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- /* Inner L3 */
- w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
- w1.il3ptr = w1.ol4ptr + m->l2_len;
- w1.il4ptr = w1.il3ptr + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
-
- /* In case of no tunnel header use only
- * shift IL3/IL4 fields a bit to use
- * OL3/OL4 for header checksum
- */
- mask = !ol3type;
- w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
- ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
-
- } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t outer_l2_len = m->outer_l2_len;
-
- /* Outer L3 */
- w1.ol3ptr = outer_l2_len;
- w1.ol4ptr = outer_l2_len + m->outer_l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
- const uint8_t l2_len = m->l2_len;
-
- /* Always use OLXPTR and OLXTYPE when only
- * when one header is present
- */
-
- /* Inner L3 */
- w1.ol3ptr = l2_len;
- w1.ol4ptr = l2_len + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
- }
-
- if (flags & NIX_TX_NEED_EXT_HDR &&
- flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
- send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
- /* HW will update ptr after vlan0 update */
- send_hdr_ext->w1.vlan1_ins_ptr = 12;
- send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
-
- send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
- /* 2B before end of l2 header */
- send_hdr_ext->w1.vlan0_ins_ptr = 12;
- send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
- }
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uint16_t lso_sb;
- uint64_t mask;
-
- mask = -(!w1.il3type);
- lso_sb = (mask & w1.ol4ptr) + (~mask & w1.il4ptr) + m->l4_len;
-
- send_hdr_ext->w0.lso_sb = lso_sb;
- send_hdr_ext->w0.lso = 1;
- send_hdr_ext->w0.lso_mps = m->tso_segsz;
- send_hdr_ext->w0.lso_format =
- NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
- w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
-
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
- uint8_t shift = is_udp_tun ? 32 : 0;
-
- shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
- shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
-
- w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
- w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
- /* Update format for UDP tunneled packet */
- send_hdr_ext->w0.lso_format = (lso_tun_fmt >> shift);
- }
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
- send_hdr->w1.u = w1.u;
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- sg->seg1_size = m->data_len;
- *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* DF bit = 1 if refcount of current mbuf or parent mbuf
- * is greater than 1
- * DF bit = 0 otherwise
- */
- send_hdr->w0.df = otx2_nix_prefree_seg(m);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
- if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- }
-}
-
-
-static __rte_always_inline void
-otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
- const rte_iova_t io_addr, const uint32_t flags)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prep_lmt(uint64_t *cmd, void *lmt_addr, const uint32_t flags)
-{
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit(io_addr);
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit_release(io_addr);
-}
-
-static __rte_always_inline uint16_t
-otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
-{
- struct nix_send_hdr_s *send_hdr;
- union nix_send_sg_s *sg;
- struct rte_mbuf *m_next;
- uint64_t *slist, sg_u;
- uint64_t nb_segs;
- uint64_t segdw;
- uint8_t off, i;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- send_hdr->w0.total = m->pkt_len;
- send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
-
- if (flags & NIX_TX_NEED_EXT_HDR)
- off = 2;
- else
- off = 0;
-
- sg = (union nix_send_sg_s *)&cmd[2 + off];
- /* Clear sg->u header before use */
- sg->u &= 0xFC00000000000000;
- sg_u = sg->u;
- slist = &cmd[3 + off];
-
- i = 0;
- nb_segs = m->nb_segs;
-
- /* Fill mbuf segments */
- do {
- m_next = m->next;
- sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
- *slist = rte_mbuf_data_iova(m);
- /* Set invert df if buffer is not to be freed by H/W */
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
- /* Commit changes to mbuf */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
- if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- rte_io_wmb();
-#endif
- slist++;
- i++;
- nb_segs--;
- if (i > 2 && nb_segs) {
- i = 0;
- /* Next SG subdesc */
- *(uint64_t *)slist = sg_u & 0xFC00000000000000;
- sg->u = sg_u;
- sg->segs = 3;
- sg = (union nix_send_sg_s *)slist;
- sg_u = sg->u;
- slist++;
- }
- m = m_next;
- } while (nb_segs);
-
- sg->u = sg_u;
- sg->segs = i;
- segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
- /* Roundup extra dwords to multiple of 2 */
- segdw = (segdw >> 1) + (segdw & 0x1);
- /* Default dwords */
- segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
- send_hdr->w0.sizem1 = segdw - 1;
-
- return segdw;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_prep_lmt(uint64_t *cmd, void *lmt_addr, uint16_t segdw)
-{
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one_release(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- rte_io_wmb();
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
-#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
-#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
-#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
-#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
-#define TSO_F NIX_TX_OFFLOAD_TSO_F
-#define TX_SEC_F NIX_TX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES \
-T(no_offload, 0, 0, 0, 0, 0, 0, 0, 4, \
- NIX_TX_OFFLOAD_NONE) \
-T(l3l4csum, 0, 0, 0, 0, 0, 0, 1, 4, \
- L3L4CSUM_F) \
-T(ol3ol4csum, 0, 0, 0, 0, 0, 1, 0, 4, \
- OL3OL4CSUM_F) \
-T(ol3ol4csum_l3l4csum, 0, 0, 0, 0, 0, 1, 1, 4, \
- OL3OL4CSUM_F | L3L4CSUM_F) \
-T(vlan, 0, 0, 0, 0, 1, 0, 0, 6, \
- VLAN_F) \
-T(vlan_l3l4csum, 0, 0, 0, 0, 1, 0, 1, 6, \
- VLAN_F | L3L4CSUM_F) \
-T(vlan_ol3ol4csum, 0, 0, 0, 0, 1, 1, 0, 6, \
- VLAN_F | OL3OL4CSUM_F) \
-T(vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 0, 1, 1, 1, 6, \
- VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff, 0, 0, 0, 1, 0, 0, 0, 4, \
- NOFF_F) \
-T(noff_l3l4csum, 0, 0, 0, 1, 0, 0, 1, 4, \
- NOFF_F | L3L4CSUM_F) \
-T(noff_ol3ol4csum, 0, 0, 0, 1, 0, 1, 0, 4, \
- NOFF_F | OL3OL4CSUM_F) \
-T(noff_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 0, 1, 1, 4, \
- NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff_vlan, 0, 0, 0, 1, 1, 0, 0, 6, \
- NOFF_F | VLAN_F) \
-T(noff_vlan_l3l4csum, 0, 0, 0, 1, 1, 0, 1, 6, \
- NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(noff_vlan_ol3ol4csum, 0, 0, 0, 1, 1, 1, 0, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 1, 1, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts, 0, 0, 1, 0, 0, 0, 0, 8, \
- TSP_F) \
-T(ts_l3l4csum, 0, 0, 1, 0, 0, 0, 1, 8, \
- TSP_F | L3L4CSUM_F) \
-T(ts_ol3ol4csum, 0, 0, 1, 0, 0, 1, 0, 8, \
- TSP_F | OL3OL4CSUM_F) \
-T(ts_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 0, 1, 1, 8, \
- TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_vlan, 0, 0, 1, 0, 1, 0, 0, 8, \
- TSP_F | VLAN_F) \
-T(ts_vlan_l3l4csum, 0, 0, 1, 0, 1, 0, 1, 8, \
- TSP_F | VLAN_F | L3L4CSUM_F) \
-T(ts_vlan_ol3ol4csum, 0, 0, 1, 0, 1, 1, 0, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 1, 1, 1, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff, 0, 0, 1, 1, 0, 0, 0, 8, \
- TSP_F | NOFF_F) \
-T(ts_noff_l3l4csum, 0, 0, 1, 1, 0, 0, 1, 8, \
- TSP_F | NOFF_F | L3L4CSUM_F) \
-T(ts_noff_ol3ol4csum, 0, 0, 1, 1, 0, 1, 0, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(ts_noff_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 0, 1, 1, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff_vlan, 0, 0, 1, 1, 1, 0, 0, 8, \
- TSP_F | NOFF_F | VLAN_F) \
-T(ts_noff_vlan_l3l4csum, 0, 0, 1, 1, 1, 0, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum, 0, 0, 1, 1, 1, 1, 0, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 1, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
- \
-T(tso, 0, 1, 0, 0, 0, 0, 0, 6, \
- TSO_F) \
-T(tso_l3l4csum, 0, 1, 0, 0, 0, 0, 1, 6, \
- TSO_F | L3L4CSUM_F) \
-T(tso_ol3ol4csum, 0, 1, 0, 0, 0, 1, 0, 6, \
- TSO_F | OL3OL4CSUM_F) \
-T(tso_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 0, 1, 1, 6, \
- TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_vlan, 0, 1, 0, 0, 1, 0, 0, 6, \
- TSO_F | VLAN_F) \
-T(tso_vlan_l3l4csum, 0, 1, 0, 0, 1, 0, 1, 6, \
- TSO_F | VLAN_F | L3L4CSUM_F) \
-T(tso_vlan_ol3ol4csum, 0, 1, 0, 0, 1, 1, 0, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 1, 1, 1, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff, 0, 1, 0, 1, 0, 0, 0, 6, \
- TSO_F | NOFF_F) \
-T(tso_noff_l3l4csum, 0, 1, 0, 1, 0, 0, 1, 6, \
- TSO_F | NOFF_F | L3L4CSUM_F) \
-T(tso_noff_ol3ol4csum, 0, 1, 0, 1, 0, 1, 0, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 0, 1, 1, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff_vlan, 0, 1, 0, 1, 1, 0, 0, 6, \
- TSO_F | NOFF_F | VLAN_F) \
-T(tso_noff_vlan_l3l4csum, 0, 1, 0, 1, 1, 0, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum, 0, 1, 0, 1, 1, 1, 0, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 1, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts, 0, 1, 1, 0, 0, 0, 0, 8, \
- TSO_F | TSP_F) \
-T(tso_ts_l3l4csum, 0, 1, 1, 0, 0, 0, 1, 8, \
- TSO_F | TSP_F | L3L4CSUM_F) \
-T(tso_ts_ol3ol4csum, 0, 1, 1, 0, 0, 1, 0, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(tso_ts_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 0, 1, 1, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_vlan, 0, 1, 1, 0, 1, 0, 0, 8, \
- TSO_F | TSP_F | VLAN_F) \
-T(tso_ts_vlan_l3l4csum, 0, 1, 1, 0, 1, 0, 1, 8, \
- TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum, 0, 1, 1, 0, 1, 1, 0, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 1, 1, 1, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff, 0, 1, 1, 1, 0, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F) \
-T(tso_ts_noff_l3l4csum, 0, 1, 1, 1, 0, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum, 0, 1, 1, 1, 0, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 0, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan, 0, 1, 1, 1, 1, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(tso_ts_noff_vlan_l3l4csum, 0, 1, 1, 1, 1, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum, 0, 1, 1, 1, 1, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec, 1, 0, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F) \
-T(sec_l3l4csum, 1, 0, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | L3L4CSUM_F) \
-T(sec_ol3ol4csum, 1, 0, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | OL3OL4CSUM_F) \
-T(sec_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_vlan, 1, 0, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | VLAN_F) \
-T(sec_vlan_l3l4csum, 1, 0, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | VLAN_F | L3L4CSUM_F) \
-T(sec_vlan_ol3ol4csum, 1, 0, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff, 1, 0, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | NOFF_F) \
-T(sec_noff_l3l4csum, 1, 0, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | NOFF_F | L3L4CSUM_F) \
-T(sec_noff_ol3ol4csum, 1, 0, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_noff_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff_vlan, 1, 0, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F) \
-T(sec_noff_vlan_l3l4csum, 1, 0, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum, 1, 0, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts, 1, 0, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F) \
-T(sec_ts_l3l4csum, 1, 0, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | L3L4CSUM_F) \
-T(sec_ts_ol3ol4csum, 1, 0, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_ts_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_vlan, 1, 0, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F) \
-T(sec_ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum, 1, 0, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff, 1, 0, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F) \
-T(sec_ts_noff_l3l4csum, 1, 0, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum, 1, 0, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan, 1, 0, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_ts_noff_vlan_l3l4csum, 1, 0, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum, 1, 0, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso, 1, 1, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F) \
-T(sec_tso_l3l4csum, 1, 1, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | L3L4CSUM_F) \
-T(sec_tso_ol3ol4csum, 1, 1, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F) \
-T(sec_tso_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_vlan, 1, 1, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F) \
-T(sec_tso_vlan_l3l4csum, 1, 1, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum, 1, 1, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff, 1, 1, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F) \
-T(sec_tso_noff_l3l4csum, 1, 1, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum, 1, 1, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan, 1, 1, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F) \
-T(sec_tso_noff_vlan_l3l4csum, 1, 1, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum, 1, 1, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts, 1, 1, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F) \
-T(sec_tso_ts_l3l4csum, 1, 1, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | L3L4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum, 1, 1, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan, 1, 1, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F) \
-T(sec_tso_ts_vlan_l3l4csum, 1, 1, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum, 1, 1, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff, 1, 1, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F) \
-T(sec_tso_ts_noff_l3l4csum, 1, 1, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum, 1, 1, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff_vlan, 1, 1, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_tso_ts_noff_vlan_l3l4csum, 1, 1, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\
-T(sec_tso_ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F | L3L4CSUM_F)
-#endif /* __OTX2_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
deleted file mode 100644
index cce643b7b5..0000000000
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ /dev/null
@@ -1,1035 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-
-#define VLAN_ID_MATCH 0x1
-#define VTAG_F_MATCH 0x2
-#define MAC_ADDR_MATCH 0x4
-#define QINQ_F_MATCH 0x8
-#define VLAN_DROP 0x10
-#define DEF_F_ENTRY 0x20
-
-enum vtag_cfg_dir {
- VTAG_TX,
- VTAG_RX
-};
-
-static int
-nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- if (enable)
- req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
- else
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
-
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static void
-nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry, bool qinq, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
- uint64_t action = 0, vtag_action = 0;
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= (uint64_t)pcifunc << 4;
- entry->action = action;
-
- if (drop) {
- entry->action &= ~((uint64_t)0xF);
- entry->action |= NIX_RX_ACTIONOP_DROP;
- return;
- }
-
- if (!qinq) {
- /* VTAG0 fields denote CTAG in single vlan case */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- } else {
- /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
- vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
- vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
- }
-
- entry->vtag_action = vtag_action;
-}
-
-static void
-nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
- int vtag_index)
-{
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } vtag_action;
-
- uint64_t action;
-
- action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
-
- /*
- * Take offset from LA since in case of untagged packet,
- * lbptr is zero.
- */
- if (type == RTE_ETH_VLAN_TYPE_OUTER) {
- vtag_action.act.vtag0_def = vtag_index;
- vtag_action.act.vtag0_lid = NPC_LID_LA;
- vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
- } else {
- vtag_action.act.vtag1_def = vtag_index;
- vtag_action.act.vtag1_lid = NPC_LID_LA;
- vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
- }
-
- entry->action = action;
- entry->vtag_action = vtag_action.reg;
-}
-
-static int
-nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static int
-nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf, uint8_t ena)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct msghdr *rsp;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
-
- req->entry = ent_idx;
- req->intf = intf;
- req->enable_entry = ena;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- return rc;
-}
-
-static int
-nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_and_write_entry_req *req;
- struct npc_mcam_alloc_and_write_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
-
- if (intf == NPC_MCAM_RX) {
- if (!drop && dev->vlan_info.def_rx_mcam_idx) {
- req->priority = NPC_MCAM_HIGHER_PRIO;
- req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
- } else if (drop && dev->vlan_info.qinq_mcam_idx) {
- req->priority = NPC_MCAM_LOWER_PRIO;
- req->ref_entry = dev->vlan_info.qinq_mcam_idx;
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
-
- req->intf = intf;
- req->enable_entry = 1;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->entry;
-}
-
-static void
-nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_read_entry_req *req;
- struct npc_mcam_read_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t intf, mcam_ena;
- int idx, rc = -EINVAL;
- uint8_t *mac_addr;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Read entry first */
- req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
-
- req->entry = mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read entry %d", mcam_index);
- return;
- }
-
- entry = rsp->entry_data;
- intf = rsp->intf;
- mcam_ena = rsp->enable;
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- if (enable) {
- mcam_mask = 0;
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
-
- } else {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
-
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* Write back the mcam entry */
- rc = nix_vlan_mcam_write(eth_dev, mcam_index,
- &entry, intf, mcam_ena);
- if (rc) {
- otx2_err("Failed to write entry %d", mcam_index);
- return;
- }
-}
-
-void
-otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
-
- /* Already in required mode */
- if (enable == vlan->promisc_on)
- return;
-
- /* Update default rx entry */
- if (vlan->def_rx_mcam_idx)
- nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
-
- /* Update all other rx filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
- nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
-
- vlan->promisc_on = enable;
-}
-
-/* Configure mcam entry with required MCAM search rules */
-static int
-nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
- uint16_t vlan_id, uint16_t flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t *mac_addr;
- int idx, kwi = 0;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- /* Channel base extracted to KW0[11:0] */
- entry.kw[kwi] = dev->rx_chan_base;
- entry.kw_mask[kwi] = BIT_ULL(12) - 1;
-
- /* Adds vlan_id & LB CTAG flag to MCAM KW */
- if (flags & VLAN_ID_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
-
- mcam_data = (uint16_t)vlan_id;
- mcam_mask = (BIT_ULL(16) - 1);
- otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
- &mcam_data, mkex->lb_xtract.len);
- otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
- &mcam_mask, mkex->lb_xtract.len);
- }
-
- /* Adds LB STAG flag to MCAM KW */
- if (flags & QINQ_F_MATCH) {
- entry.kw[kwi] |= NPC_LT_LB_STAG_QINQ << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
- }
-
- /* Adds LB CTAG & LB STAG flags to MCAM KW */
- if (flags & VTAG_F_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
- }
-
- /* Adds port MAC address to MCAM KW */
- if (flags & MAC_ADDR_MATCH) {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* VLAN_DROP: for drop action for all vlan packets when filter is on.
- * For QinQ, enable vtag action for both outer & inner tags
- */
- if (flags & VLAN_DROP)
- nix_set_rx_vlan_action(eth_dev, &entry, false, true);
- else if (flags & QINQ_F_MATCH)
- nix_set_rx_vlan_action(eth_dev, &entry, true, false);
- else
- nix_set_rx_vlan_action(eth_dev, &entry, false, false);
-
- if (flags & DEF_F_ENTRY)
- dev->vlan_info.def_rx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
- flags & VLAN_DROP);
-}
-
-/* Installs/Removes/Modifies default rx entry */
-static int
-nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
- bool filter, bool enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- uint16_t flags = 0;
- int mcam_idx, rc;
-
- /* Use default mcam entry to either drop vlan traffic when
- * vlan filter is on or strip vtag when strip is enabled.
- * Allocate default entry which matches port mac address
- * and vtag(ctag/stag) flags with drop action.
- */
- if (!vlan->def_rx_mcam_idx) {
- if (!eth_dev->data->promiscuous)
- flags = MAC_ADDR_MATCH;
-
- if (filter && enable)
- flags |= VTAG_F_MATCH | VLAN_DROP;
- else if (strip && enable)
- flags |= VTAG_F_MATCH;
- else
- return 0;
-
- flags |= DEF_F_ENTRY;
-
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- return -mcam_idx;
- }
-
- vlan->def_rx_mcam_idx = mcam_idx;
- return 0;
- }
-
- /* Filter is already enabled, so packets would be dropped anyways. No
- * processing needed for enabling strip wrt mcam entry.
- */
-
- /* Filter disable request */
- if (vlan->filter_on && filter && !enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
-
- /* Free default rx entry only when
- * 1. strip is not on and
- * 2. qinq entry is allocated before default entry.
- */
- if (vlan->strip_on ||
- (vlan->qinq_on && !vlan->qinq_before_def)) {
- if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- RTE_ETH_MQ_RX_RSS)
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_RSS;
- else
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_UCAST;
- return nix_vlan_mcam_write(eth_dev,
- vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent,
- NIX_INTF_RX, 1);
- } else {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- /* Filter enable request */
- if (!vlan->filter_on && filter && enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
- vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
- return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
- }
-
- /* Strip disable request */
- if (vlan->strip_on && strip && !enable) {
- if (!vlan->filter_on &&
- !(vlan->qinq_on && !vlan->qinq_before_def)) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- return 0;
-}
-
-/* Installs/Removes default tx entry */
-static int
-nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, int vtag_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct mcam_entry entry;
- uint16_t pf_func;
- int rc;
-
- if (!vlan->def_tx_mcam_idx && enable) {
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Only pf_func is matched, swap it's bytes */
- pf_func = (dev->pf_func & 0xff) << 8;
- pf_func |= (dev->pf_func >> 8) & 0xff;
-
- /* PF Func extracted to KW1[47:32] */
- entry.kw[0] = (uint64_t)pf_func << 32;
- entry.kw_mask[0] = (BIT_ULL(16) - 1) << 32;
-
- nix_set_tx_vlan_action(&entry, type, vtag_index);
- vlan->def_tx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
- NIX_INTF_TX, 0);
- }
-
- if (vlan->def_tx_mcam_idx && !enable) {
- rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
-
- return 0;
-}
-
-/* Configure vlan stripping on or off */
-static int
-nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- int rc = -EINVAL;
-
- rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
- if (rc) {
- otx2_err("Failed to config default rx entry");
- return rc;
- }
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- /* cfg_type = 1 for rx vlan cfg */
- vtag_cfg->cfg_type = VTAG_RX;
-
- if (enable)
- vtag_cfg->rx.strip_vtag = 1;
- else
- vtag_cfg->rx.strip_vtag = 0;
-
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- /* Use rx vtag type index[0] for now */
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- dev->vlan_info.strip_on = enable;
- return rc;
-}
-
-/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
-static int
-nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
- uint16_t vlan_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc = -EINVAL;
-
- if (!vlan_id && enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- /* Enable/disable existing vlan filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (vlan_id) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_enb_dis(dev,
- entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- } else {
- rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- }
-
- if (!vlan_id && !enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- return 0;
-}
-
-/* Enable/disable vlan filtering for the given vlan_id */
-int
-otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int entry_exists = 0;
- int rc = -EINVAL;
- int mcam_idx;
-
- if (!vlan_id) {
- otx2_err("Vlan Id can't be zero");
- return rc;
- }
-
- if (!vlan->def_rx_mcam_idx) {
- otx2_err("Vlan Filtering is disabled, enable it first");
- return rc;
- }
-
- if (on) {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- /* Vlan entry already exists */
- entry_exists = 1;
- /* Mcam entry already allocated */
- if (entry->mcam_idx) {
- rc = nix_vlan_hw_filter(eth_dev, on,
- vlan_id);
- return rc;
- }
- break;
- }
- }
-
- if (!entry_exists) {
- entry = rte_zmalloc("otx2_nix_vlan_entry",
- sizeof(struct vlan_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- return -ENOMEM;
- }
- }
-
- /* Enables vlan_id & mac address based filtering */
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH |
- MAC_ADDR_MATCH);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- return mcam_idx;
- }
-
- entry->mcam_idx = mcam_idx;
- if (!entry_exists) {
- entry->vlan_id = vlan_id;
- TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
- }
- } else {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_free(dev, entry->mcam_idx);
- if (rc)
- return rc;
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- break;
- }
- }
- }
- return 0;
-}
-
-/* Configure double vlan(qinq) on or off */
-static int
-otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
- const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan_info;
- int mcam_idx;
- int rc;
-
- vlan_info = &dev->vlan_info;
-
- if (!enable) {
- if (!vlan_info->qinq_mcam_idx)
- return 0;
-
- rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
- if (rc)
- return rc;
-
- vlan_info->qinq_mcam_idx = 0;
- dev->vlan_info.qinq_on = 0;
- vlan_info->qinq_before_def = 0;
- return 0;
- }
-
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
- QINQ_F_MATCH | MAC_ADDR_MATCH);
- if (mcam_idx < 0)
- return mcam_idx;
-
- if (!vlan_info->def_rx_mcam_idx)
- vlan_info->qinq_before_def = 1;
-
- vlan_info->qinq_mcam_idx = mcam_idx;
- dev->vlan_info.qinq_on = 1;
- return 0;
-}
-
-int
-otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t offloads = dev->rx_offloads;
- struct rte_eth_rxmode *rxmode;
- int rc = 0;
-
- rxmode = &eth_dev->data->dev_conf.rxmode;
-
- if (mask & RTE_ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, true);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, false);
- }
- if (rc)
- goto done;
- }
-
- if (mask & RTE_ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, true, 0);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, false, 0);
- }
- if (rc)
- goto done;
- }
-
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
- if (!dev->vlan_info.qinq_on) {
- offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, true);
- if (rc)
- goto done;
- }
- } else {
- if (dev->vlan_info.qinq_on) {
- offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, false);
- if (rc)
- goto done;
- }
- }
-
- if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
- dev->rx_offloads |= offloads;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(eth_dev);
- }
-
-done:
- return rc;
-}
-
-int
-otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct nix_set_vlan_tpid *tpid_cfg;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
-
- tpid_cfg->tpid = tpid;
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
- else
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- dev->vlan_info.outer_vlan_tpid = tpid;
- else
- dev->vlan_info.inner_vlan_tpid = tpid;
- return 0;
-}
-
-int
-otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
- struct otx2_mbox *mbox = otx2_dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- struct otx2_vlan_info *vlan;
- int rc, rc1, vtag_index = 0;
-
- if (vlan_id == 0) {
- otx2_err("vlan id can't be zero");
- return -EINVAL;
- }
-
- vlan = &otx2_dev->vlan_info;
-
- if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
- otx2_err("pvid %d is already enabled", vlan_id);
- return -EINVAL;
- }
-
- if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
- otx2_err("another pvid is enabled, disable that first");
- return -EINVAL;
- }
-
- /* No pvid active */
- if (!on && !vlan->pvid_insert_on)
- return 0;
-
- /* Given pvid already disabled */
- if (!on && vlan->pvid != vlan_id)
- return 0;
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
-
- if (on) {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- if (vlan->outer_vlan_tpid)
- vtag_cfg->tx.vtag0 = ((uint32_t)vlan->outer_vlan_tpid
- << 16) | vlan_id;
- else
- vtag_cfg->tx.vtag0 =
- ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- } else {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
- vtag_cfg->tx.free_vtag0 = 1;
- }
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (on) {
- vtag_index = rsp->vtag0_idx;
- } else {
- vlan->pvid = 0;
- vlan->pvid_insert_on = 0;
- vlan->outer_vlan_idx = 0;
- }
-
- rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
- vtag_index, on);
- if (rc < 0) {
- printf("Default tx entry failed with rc %d\n", rc);
- vtag_cfg->tx.vtag0_idx = vtag_index;
- vtag_cfg->tx.free_vtag0 = 1;
- vtag_cfg->tx.cfg_vtag0 = 0;
-
- rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc1)
- otx2_err("Vtag free failed");
-
- return rc;
- }
-
- if (on) {
- vlan->pvid = vlan_id;
- vlan->pvid_insert_on = 1;
- vlan->outer_vlan_idx = vtag_index;
- }
-
- return 0;
-}
-
-void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
- __rte_unused uint16_t queue,
- __rte_unused int on)
-{
- otx2_err("Not Supported");
-}
-
-static int
-nix_vlan_rx_mkex_offset(uint64_t mask)
-{
- int nib_count = 0;
-
- while (mask) {
- nib_count += mask & 1;
- mask >>= 1;
- }
-
- return nib_count * 4;
-}
-
-static int
-nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
-{
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_xtract_info *x_info = NULL;
- uint64_t rx_keyx;
- otx2_dxcfg_t *p;
- int rc = -EINVAL;
-
- if (npc == NULL) {
- otx2_err("Missing npc mkex configuration");
- return rc;
- }
-
-#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
-
- rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
- if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
- return rc;
-
- if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
- NPC_KEX_LB_LTYPE_NIBBLE_ENA)
- return rc;
-
- mkex->lb_lt_offset =
- nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
-
- p = &npc->prx_dxcfg;
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
- memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
- memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
-
- return 0;
-}
-
-static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_entry *entry;
- int rc;
-
- /* VLAN filters can't be set without setting filtern on */
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
- if (rc) {
- otx2_err("Failed to reinstall vlan filters");
- return;
- }
-
- TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
- rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
- if (rc)
- otx2_err("Failed to reinstall filter for vlan:%d",
- entry->vlan_id);
- }
-}
-
-int
-otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, mask;
-
- /* Port initialized for first time or restarted */
- if (!dev->configured) {
- rc = nix_vlan_get_mkex_info(dev);
- if (rc) {
- otx2_err("Failed to get vlan mkex info rc=%d", rc);
- return rc;
- }
-
- TAILQ_INIT(&dev->vlan_info.fltr_tbl);
- } else {
- /* Reinstall all mcam entries now if filter offload is set */
- if (eth_dev->data->dev_conf.rxmode.offloads &
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
- nix_vlan_reinstall_vlan_filters(eth_dev);
- }
-
- mask =
- RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
- rc = otx2_nix_vlan_offload_set(eth_dev, mask);
- if (rc) {
- otx2_err("Failed to set vlan offload rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-int
-otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc;
-
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (!dev->configured) {
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- } else {
- /* MCAM entries freed by flow_fini & lf_free on
- * port stop.
- */
- entry->mcam_idx = 0;
- }
- }
-
- if (!dev->configured) {
- if (vlan->def_rx_mcam_idx) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- }
- }
-
- otx2_nix_config_double_vlan(eth_dev, false);
- vlan->def_rx_mcam_idx = 0;
- return 0;
-}
diff --git a/drivers/net/octeontx2/version.map b/drivers/net/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/net/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
index 9326925025..dc720368ab 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -113,7 +113,7 @@
#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
+#define PCI_DEVID_CN9K_EP_NET_VF 0xB203 /* OCTEON 9 EP mode */
#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
int
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index fd5e8ed263..8a59a1a194 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -150,7 +150,7 @@ struct otx_ep_iq_config {
/** The instruction (input) queue.
* The input queue is used to post raw (instruction) mode data or packet data
- * to OCTEON TX2 device from the host. Each IQ of a OTX_EP EP VF device has one
+ * to OCTEON 9 device from the host. Each IQ of a OTX_EP EP VF device has one
* such structure to represent it.
*/
struct otx_ep_instr_queue {
@@ -170,12 +170,12 @@ struct otx_ep_instr_queue {
/* Input ring index, where the driver should write the next packet */
uint32_t host_write_index;
- /* Input ring index, where the OCTEON TX2 should read the next packet */
+ /* Input ring index, where the OCTEON 9 should read the next packet */
uint32_t otx_read_index;
uint32_t reset_instr_cnt;
- /** This index aids in finding the window in the queue where OCTEON TX2
+ /** This index aids in finding the window in the queue where OCTEON 9
* has read the commands.
*/
uint32_t flush_index;
@@ -195,7 +195,7 @@ struct otx_ep_instr_queue {
/* OTX_EP instruction count register for this ring. */
void *inst_cnt_reg;
- /* Number of instructions pending to be posted to OCTEON TX2. */
+ /* Number of instructions pending to be posted to OCTEON 9. */
uint32_t fill_cnt;
/* Statistics for this input queue. */
@@ -230,8 +230,8 @@ union otx_ep_rh {
};
#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh))
-/** Information about packet DMA'ed by OCTEON TX2.
- * The format of the information available at Info Pointer after OCTEON TX2
+/** Information about packet DMA'ed by OCTEON 9.
+ * The format of the information available at Info Pointer after OCTEON 9
* has posted a packet. Not all descriptors have valid information. Only
* the Info field of the first descriptor for a packet has information
* about the packet.
@@ -295,7 +295,7 @@ struct otx_ep_droq {
/* Driver should read the next packet at this index */
uint32_t read_idx;
- /* OCTEON TX2 will write the next packet at this index */
+ /* OCTEON 9 will write the next packet at this index */
uint32_t write_idx;
/* At this index, the driver will refill the descriptor's buffer */
@@ -326,7 +326,7 @@ struct otx_ep_droq {
*/
void *pkts_credit_reg;
- /** Pointer to the mapped packet sent register. OCTEON TX2 writes the
+ /** Pointer to the mapped packet sent register. OCTEON 9 writes the
* number of packets DMA'ed to host memory in this register.
*/
void *pkts_sent_reg;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index c3cec6d833..806add246b 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -102,7 +102,7 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = otx_ep_vf_setup_device(otx_epvf);
otx_epvf->fn_list.disable_io_queues(otx_epvf);
break;
- case PCI_DEVID_OCTEONTX2_EP_NET_VF:
+ case PCI_DEVID_CN9K_EP_NET_VF:
case PCI_DEVID_CN98XX_EP_NET_VF:
otx_epvf->chip_id = dev_id;
ret = otx2_ep_vf_setup_device(otx_epvf);
@@ -137,7 +137,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts;
- else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+ else if (otx_epvf->chip_id == PCI_DEVID_CN9K_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CN98XX_EP_NET_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts;
ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
@@ -422,7 +422,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->pdev = pdev;
otx_epdev_init(otx_epvf);
- if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF)
+ if (pdev->id.device_id == PCI_DEVID_CN9K_EP_NET_VF)
otx_epvf->pkind = SDP_OTX2_PKIND;
else
otx_epvf->pkind = SDP_PKIND;
@@ -450,7 +450,7 @@ otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
/* Set of PCI devices this driver supports */
static const struct rte_pci_id pci_id_otx_ep_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN9K_EP_NET_VF) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN98XX_EP_NET_VF) },
{ .vendor_id = 0, /* sentinel */ }
};
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 9338b30672..59df6ad857 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -85,7 +85,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq = otx_ep->instr_queue[iq_no];
q_size = conf->iq.instr_type * num_descs;
- /* IQ memory creation for Instruction submission to OCTEON TX2 */
+ /* IQ memory creation for Instruction submission to OCTEON 9 */
iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev,
"instr_queue", iq_no, q_size,
OTX_EP_PCI_RING_ALIGN,
@@ -106,8 +106,8 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->nb_desc = num_descs;
/* Create a IQ request list to hold requests that have been
- * posted to OCTEON TX2. This list will be used for freeing the IQ
- * data buffer(s) later once the OCTEON TX2 fetched the requests.
+ * posted to OCTEON 9. This list will be used for freeing the IQ
+ * data buffer(s) later once the OCTEON 9 fetched the requests.
*/
iq->req_list = rte_zmalloc_socket("request_list",
(iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE),
@@ -450,7 +450,7 @@ post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd)
uint8_t *iqptr, cmdsize;
/* This ensures that the read index does not wrap around to
- * the same position if queue gets full before OCTEON TX2 could
+ * the same position if queue gets full before OCTEON 9 could
* fetch any instr.
*/
if (iq->instr_pending > (iq->nb_desc - 1))
@@ -979,7 +979,7 @@ otx_ep_check_droq_pkts(struct otx_ep_droq *droq)
return new_pkts;
}
-/* Check for response arrival from OCTEON TX2
+/* Check for response arrival from OCTEON 9
* returns number of requests completed
*/
uint16_t
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 6cea732228..ace4627218 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -65,11 +65,11 @@
intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
'SVendor': None, 'SDevice': None}
-octeontx2_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
+cnxk_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
'SVendor': None, 'SDevice': None}
-octeontx2_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
+cnxk_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
'SVendor': None, 'SDevice': None}
-octeontx2_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
+cn9k_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
'SVendor': None, 'SDevice': None}
network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
@@ -77,10 +77,10 @@
crypto_devices = [encryption_class, intel_processor_class]
dma_devices = [cnxk_dma, hisilicon_dma,
intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
-eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso]
-mempool_devices = [cavium_fpa, octeontx2_npa]
+eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, cnxk_sso]
+mempool_devices = [cavium_fpa, cnxk_npa]
compress_devices = [cavium_zip]
-regex_devices = [octeontx2_ree]
+regex_devices = [cn9k_ree]
misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev,
intel_ntb_skx, intel_ntb_icx]
--
2.34.1
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-05 18:00 0% ` Stephen Hemminger
@ 2021-12-06 9:57 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-12-06 9:57 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sun, Dec 5, 2021 at 11:30 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Sun, 5 Dec 2021 12:33:57 +0530
> Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> > On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > >
> > > On Sat, 4 Dec 2021 22:54:58 +0530
> > > <jerinj@marvell.com> wrote:
> > >
> > > > + /**
> > > > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > > > + *
> > > > + * Based on device support and use-case need, there are two different
> > > > + * ways to enable PFC. The first case is the port level PFC
> > > > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > > > + * API shall be used to configure the PFC, and PFC frames will be
> > > > + * generated using based on VLAN TC value.
> > > > + * The second case is the queue level PFC configuration, in this case,
> > > > + * Any packet field content can be used to steer the packet to the
> > > > + * specific queue using rte_flow or RSS and then use
> > > > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > > > + * on each queue. Based on congestion selected on the specific queue,
> > > > + * configured TC shall be used to generate PFC frames.
> > > > + *
> > > > + * When set to non zero value, application must use queue level
> > > > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > > > + * instead of port level PFC configuration via
> > > > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > > > + * PFC configuration.
> > > > + */
> > > > + uint8_t pfc_queue_tc_max;
> > > > + uint8_t reserved_8s[7];
> > > > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > >
> > > Not sure you can claim ABI compatibility because the previous versions of DPDK
> > > did not enforce that reserved fields must be zero. The Linux kernel
> > > learned this when adding flags for new system calls; reserved fields only
> > > work if you enforce that application must set them to zero.
> >
> > In this case it rte_eth_dev_info is an out parameter and implementation of
> > rte_eth_dev_info_get() already memseting to 0.
> > Do you still see any other ABI issue?
> >
> > See rte_eth_dev_info_get()
> > /*
> > * Init dev_info before port_id check since caller does not have
> > * return status and does not know if get is successful or not.
> > */
> > memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
>
> The concern was from the misreading comment. It talks about what application should do.
> Could you reword the comment so that it describes what pfc_queue_tc_max is here
The comment is at rte_eth_dev_info::pfc_queue_tc_max, so it is implied
that it is a get parameter.
current comment
---
+ * When set to non zero value, application must use queue level
+ * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
+ * instead of port level PFC configuration via
+ * rte_eth_dev_priority_flow_ctrl_set() API to realize
+ * PFC configuration.
---
Does updating to the following help to clarify? If so, I will send v2; if
not, please suggest.
---
+ * When set to non zero value by the driver, application must use queue level
^^^^^^^^^^^
+ * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
+ * instead of port level PFC configuration via
+ * rte_eth_dev_priority_flow_ctrl_set() API to realize
+ * PFC configuration.
---
> and move the flow control set part of the comment to where the API for that is.
The comment is needed for rte_eth_dev_priority_flow_ctrl_set() and
rte_eth_dev_priority_flow_ctrl_queue_set().
Instead of duplicating the comments, I added the comment at
rte_eth_dev_info::pfc_queue_tc_max and
added "@see struct rte_eth_dev_info::pfc_queue_tc_max priority flow
control usage models"
in rte_eth_dev_priority_flow_ctrl_set() and
rte_eth_dev_priority_flow_ctrl_queue_set().
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-06 8:35 1% [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers jerinj
@ 2021-12-06 13:35 3% ` Ferruh Yigit
2021-12-07 7:39 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-12-06 13:35 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, Akhil Goyal, Declan Doherty,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov
Cc: sburla, lironh
On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
> From: Jerin Jacob<jerinj@marvell.com>
>
> As per the deprecation notice, In the view of enabling unified driver
> for octeontx2(cn9k)/octeontx3(cn10k), removing drivers/octeontx2
> drivers and replace with drivers/cnxk/ which
> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>
> This patch does the following
>
> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
> - Rename config/arm/arm64_octeontx2_linux_gcc as
> config/arm/arm64_cn9k_linux_gcc
> - Update the documentation and MAINTAINERS to reflect the same.
> - Change the reference to OCTEONTX2 as OCTEON 9. The kernel related
> documentation is not accounted for this change as kernel documentation
> still uses OCTEONTX2.
>
> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
> ---
> MAINTAINERS | 37 -
> app/test/meson.build | 1 -
> app/test/test_cryptodev.c | 7 -
> app/test/test_cryptodev.h | 1 -
> app/test/test_cryptodev_asym.c | 17 -
> app/test/test_eventdev.c | 8 -
> config/arm/arm64_cn10k_linux_gcc | 1 -
> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
> config/arm/meson.build | 10 +-
> devtools/check-abi.sh | 4 +
> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
> doc/guides/cryptodevs/index.rst | 1 -
> doc/guides/cryptodevs/octeontx2.rst | 188 -
> doc/guides/dmadevs/cnxk.rst | 2 +-
> doc/guides/eventdevs/features/octeontx2.ini | 30 -
> doc/guides/eventdevs/index.rst | 1 -
> doc/guides/eventdevs/octeontx2.rst | 178 -
> doc/guides/mempool/index.rst | 1 -
> doc/guides/mempool/octeontx2.rst | 92 -
> doc/guides/nics/cnxk.rst | 4 +-
> doc/guides/nics/features/octeontx2.ini | 97 -
> doc/guides/nics/features/octeontx2_vec.ini | 48 -
> doc/guides/nics/features/octeontx2_vf.ini | 45 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/octeontx2.rst | 465 ---
> doc/guides/nics/octeontx_ep.rst | 4 +-
> doc/guides/platform/cnxk.rst | 12 +
> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
> doc/guides/platform/index.rst | 1 -
> doc/guides/platform/octeontx2.rst | 520 ---
> doc/guides/rel_notes/deprecation.rst | 17 -
> doc/guides/rel_notes/release_19_08.rst | 12 +-
> doc/guides/rel_notes/release_19_11.rst | 6 +-
> doc/guides/rel_notes/release_20_02.rst | 8 +-
> doc/guides/rel_notes/release_20_05.rst | 4 +-
> doc/guides/rel_notes/release_20_08.rst | 6 +-
> doc/guides/rel_notes/release_20_11.rst | 8 +-
> doc/guides/rel_notes/release_21_02.rst | 10 +-
> doc/guides/rel_notes/release_21_05.rst | 6 +-
> doc/guides/rel_notes/release_21_11.rst | 2 +-
Not sure about updating old release notes files, using 'octeontx2' still can make
sense for the context of those releases.
Also search still gives some instances of 'octeontx2', like 'devtools/check-abi.sh'
one, can you please confirm if OK to have them:
$git grep -i octeontx2
Except for the above items, agree with the change in principle, and build test looks good:
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-06 13:35 3% ` Ferruh Yigit
@ 2021-12-07 7:39 3% ` Jerin Jacob
2021-12-07 11:01 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-12-07 7:39 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jerin Jacob, dpdk-dev, Thomas Monjalon, Akhil Goyal,
Declan Doherty, Ruifeng Wang, Jan Viktorin, Bruce Richardson,
Ray Kinsella, Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi
On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
> > From: Jerin Jacob<jerinj@marvell.com>
> >
> > As per the deprecation notice, In the view of enabling unified driver
> > for octeontx2(cn9k)/octeontx3(cn10k), removing drivers/octeontx2
> > drivers and replace with drivers/cnxk/ which
> > supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
> >
> > This patch does the following
> >
> > - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
> > - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
> > - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
> > - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
> > - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
> > - Rename config/arm/arm64_octeontx2_linux_gcc as
> > config/arm/arm64_cn9k_linux_gcc
> > - Update the documentation and MAINTAINERS to reflect the same.
> > - Change the reference to OCTEONTX2 as OCTEON 9. The kernel related
> > documentation is not accounted for this change as kernel documentation
> > still uses OCTEONTX2.
> >
> > Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
> > Signed-off-by: Jerin Jacob<jerinj@marvell.com>
> > ---
> > MAINTAINERS | 37 -
> > app/test/meson.build | 1 -
> > app/test/test_cryptodev.c | 7 -
> > app/test/test_cryptodev.h | 1 -
> > app/test/test_cryptodev_asym.c | 17 -
> > app/test/test_eventdev.c | 8 -
> > config/arm/arm64_cn10k_linux_gcc | 1 -
> > ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
> > config/arm/meson.build | 10 +-
> > devtools/check-abi.sh | 4 +
> > doc/guides/cryptodevs/features/octeontx2.ini | 87 -
> > doc/guides/cryptodevs/index.rst | 1 -
> > doc/guides/cryptodevs/octeontx2.rst | 188 -
> > doc/guides/dmadevs/cnxk.rst | 2 +-
> > doc/guides/eventdevs/features/octeontx2.ini | 30 -
> > doc/guides/eventdevs/index.rst | 1 -
> > doc/guides/eventdevs/octeontx2.rst | 178 -
> > doc/guides/mempool/index.rst | 1 -
> > doc/guides/mempool/octeontx2.rst | 92 -
> > doc/guides/nics/cnxk.rst | 4 +-
> > doc/guides/nics/features/octeontx2.ini | 97 -
> > doc/guides/nics/features/octeontx2_vec.ini | 48 -
> > doc/guides/nics/features/octeontx2_vf.ini | 45 -
> > doc/guides/nics/index.rst | 1 -
> > doc/guides/nics/octeontx2.rst | 465 ---
> > doc/guides/nics/octeontx_ep.rst | 4 +-
> > doc/guides/platform/cnxk.rst | 12 +
> > .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
> > .../img/octeontx2_resource_virtualization.svg | 2418 ------------
> > doc/guides/platform/index.rst | 1 -
> > doc/guides/platform/octeontx2.rst | 520 ---
> > doc/guides/rel_notes/deprecation.rst | 17 -
> > doc/guides/rel_notes/release_19_08.rst | 12 +-
> > doc/guides/rel_notes/release_19_11.rst | 6 +-
> > doc/guides/rel_notes/release_20_02.rst | 8 +-
> > doc/guides/rel_notes/release_20_05.rst | 4 +-
> > doc/guides/rel_notes/release_20_08.rst | 6 +-
> > doc/guides/rel_notes/release_20_11.rst | 8 +-
> > doc/guides/rel_notes/release_21_02.rst | 10 +-
> > doc/guides/rel_notes/release_21_05.rst | 6 +-
> > doc/guides/rel_notes/release_21_11.rst | 2 +-
>
> Not sure about updating old release notes files, using 'octeontx2' still can make
> sense for the context of those releases.
OK. I will send v2 with keeping octeontx2 in OLD release notes.
>
> Also search still gives some instances of 'octeontx2', like 'devtools/check-abi.sh'
> one, can you please confirm if OK to have them:
> $git grep -i octeontx2
This change skips the octeontx2 driver in the ABI check, as the driver is
removed, so this change is needed.
if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
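The quoted fragment can be exercised standalone; the sketch below simulates the dump files (the file names and layout here are made up for illustration — the real script iterates over abidiff dump files):

```shell
# Stand-alone sketch of the skip logic quoted above: octeontx2 ABI
# dumps are skipped now that the driver tree is removed.
for name in librte_mempool_octeontx2 librte_mempool_cnxk; do
    dump="${name}.dump"
    printf '%s\n' "$name" > "$dump"   # stand-in for a real ABI dump file
    if grep -qE "\<librte_.*_octeontx2" "$dump"; then
        echo "Skipped removed driver $name."
    else
        echo "Checked $name."
    fi
    rm -f "$dump"
done
```

The `\<` word boundary keeps the pattern from matching inside longer identifiers while still catching any `librte_*_octeontx2` library name.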
>
> Except for the above items, agree with the change in principle, and build test looks good:
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-07 7:39 3% ` Jerin Jacob
@ 2021-12-07 11:01 0% ` Ferruh Yigit
2021-12-07 11:51 0% ` Kevin Traynor
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-12-07 11:01 UTC (permalink / raw)
To: Thomas Monjalon, John McNamara, David Marchand
Cc: Jerin Jacob, dpdk-dev, Akhil Goyal, Declan Doherty, Ruifeng Wang,
Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi,
Jerin Jacob
On 12/7/2021 7:39 AM, Jerin Jacob wrote:
> On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
>>> From: Jerin Jacob<jerinj@marvell.com>
>>>
>>> As per the deprecation notice, In the view of enabling unified driver
>>> for octeontx2(cn9k)/octeontx3(cn10k), removing drivers/octeontx2
>>> drivers and replace with drivers/cnxk/ which
>>> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>>>
>>> This patch does the following
>>>
>>> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
>>> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
>>> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
>>> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
>>> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
>>> - Rename config/arm/arm64_octeontx2_linux_gcc as
>>> config/arm/arm64_cn9k_linux_gcc
>>> - Update the documentation and MAINTAINERS to reflect the same.
>>> - Change the reference to OCTEONTX2 as OCTEON 9. The kernel related
>>> documentation is not accounted for this change as kernel documentation
>>> still uses OCTEONTX2.
>>>
>>> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
>>> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
>>> ---
>>> MAINTAINERS | 37 -
>>> app/test/meson.build | 1 -
>>> app/test/test_cryptodev.c | 7 -
>>> app/test/test_cryptodev.h | 1 -
>>> app/test/test_cryptodev_asym.c | 17 -
>>> app/test/test_eventdev.c | 8 -
>>> config/arm/arm64_cn10k_linux_gcc | 1 -
>>> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
>>> config/arm/meson.build | 10 +-
>>> devtools/check-abi.sh | 4 +
>>> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
>>> doc/guides/cryptodevs/index.rst | 1 -
>>> doc/guides/cryptodevs/octeontx2.rst | 188 -
>>> doc/guides/dmadevs/cnxk.rst | 2 +-
>>> doc/guides/eventdevs/features/octeontx2.ini | 30 -
>>> doc/guides/eventdevs/index.rst | 1 -
>>> doc/guides/eventdevs/octeontx2.rst | 178 -
>>> doc/guides/mempool/index.rst | 1 -
>>> doc/guides/mempool/octeontx2.rst | 92 -
>>> doc/guides/nics/cnxk.rst | 4 +-
>>> doc/guides/nics/features/octeontx2.ini | 97 -
>>> doc/guides/nics/features/octeontx2_vec.ini | 48 -
>>> doc/guides/nics/features/octeontx2_vf.ini | 45 -
>>> doc/guides/nics/index.rst | 1 -
>>> doc/guides/nics/octeontx2.rst | 465 ---
>>> doc/guides/nics/octeontx_ep.rst | 4 +-
>>> doc/guides/platform/cnxk.rst | 12 +
>>> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
>>> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
>>> doc/guides/platform/index.rst | 1 -
>>> doc/guides/platform/octeontx2.rst | 520 ---
>>> doc/guides/rel_notes/deprecation.rst | 17 -
>>> doc/guides/rel_notes/release_19_08.rst | 12 +-
>>> doc/guides/rel_notes/release_19_11.rst | 6 +-
>>> doc/guides/rel_notes/release_20_02.rst | 8 +-
>>> doc/guides/rel_notes/release_20_05.rst | 4 +-
>>> doc/guides/rel_notes/release_20_08.rst | 6 +-
>>> doc/guides/rel_notes/release_20_11.rst | 8 +-
>>> doc/guides/rel_notes/release_21_02.rst | 10 +-
>>> doc/guides/rel_notes/release_21_05.rst | 6 +-
>>> doc/guides/rel_notes/release_21_11.rst | 2 +-
>>
>> Not sure about updating old release notes files, using 'octeontx2' still can make
>> sense for the context of those releases.
>
> OK. I will send v2 with keeping octeontx2 in OLD release notes.
>
>
Not related with this set specifically, a more general question about updating
old release notes.
For me release notes should be frozen with the release and shouldn't be updated
at all afterwards, but there is no agreement on this and in practice old release
notes are updated.
My question is, is there any benefit to keep a separate release notes file for
each release, and need to maintain old ones.
What about having a single release file, 'release.rst', and reset it after each
release?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-07 11:01 0% ` Ferruh Yigit
@ 2021-12-07 11:51 0% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-12-07 11:51 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon, John McNamara, David Marchand
Cc: Jerin Jacob, dpdk-dev, Akhil Goyal, Declan Doherty, Ruifeng Wang,
Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi,
Jerin Jacob
On 07/12/2021 11:01, Ferruh Yigit wrote:
> On 12/7/2021 7:39 AM, Jerin Jacob wrote:
>> On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>>
>>> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
>>>> From: Jerin Jacob<jerinj@marvell.com>
>>>>
>>>> As per the deprecation notice, In the view of enabling unified driver
>>>> for octeontx2(cn9k)/octeontx3(cn10k), removing drivers/octeontx2
>>>> drivers and replace with drivers/cnxk/ which
>>>> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>>>>
>>>> This patch does the following
>>>>
>>>> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
>>>> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
>>>> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
>>>> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
>>>> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
>>>> - Rename config/arm/arm64_octeontx2_linux_gcc as
>>>> config/arm/arm64_cn9k_linux_gcc
>>>> - Update the documentation and MAINTAINERS to reflect the same.
>>>> - Change the reference to OCTEONTX2 as OCTEON 9. The kernel related
>>>> documentation is not accounted for this change as kernel documentation
>>>> still uses OCTEONTX2.
>>>>
>>>> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
>>>> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
>>>> ---
>>>> MAINTAINERS | 37 -
>>>> app/test/meson.build | 1 -
>>>> app/test/test_cryptodev.c | 7 -
>>>> app/test/test_cryptodev.h | 1 -
>>>> app/test/test_cryptodev_asym.c | 17 -
>>>> app/test/test_eventdev.c | 8 -
>>>> config/arm/arm64_cn10k_linux_gcc | 1 -
>>>> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
>>>> config/arm/meson.build | 10 +-
>>>> devtools/check-abi.sh | 4 +
>>>> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
>>>> doc/guides/cryptodevs/index.rst | 1 -
>>>> doc/guides/cryptodevs/octeontx2.rst | 188 -
>>>> doc/guides/dmadevs/cnxk.rst | 2 +-
>>>> doc/guides/eventdevs/features/octeontx2.ini | 30 -
>>>> doc/guides/eventdevs/index.rst | 1 -
>>>> doc/guides/eventdevs/octeontx2.rst | 178 -
>>>> doc/guides/mempool/index.rst | 1 -
>>>> doc/guides/mempool/octeontx2.rst | 92 -
>>>> doc/guides/nics/cnxk.rst | 4 +-
>>>> doc/guides/nics/features/octeontx2.ini | 97 -
>>>> doc/guides/nics/features/octeontx2_vec.ini | 48 -
>>>> doc/guides/nics/features/octeontx2_vf.ini | 45 -
>>>> doc/guides/nics/index.rst | 1 -
>>>> doc/guides/nics/octeontx2.rst | 465 ---
>>>> doc/guides/nics/octeontx_ep.rst | 4 +-
>>>> doc/guides/platform/cnxk.rst | 12 +
>>>> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
>>>> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
>>>> doc/guides/platform/index.rst | 1 -
>>>> doc/guides/platform/octeontx2.rst | 520 ---
>>>> doc/guides/rel_notes/deprecation.rst | 17 -
>>>> doc/guides/rel_notes/release_19_08.rst | 12 +-
>>>> doc/guides/rel_notes/release_19_11.rst | 6 +-
>>>> doc/guides/rel_notes/release_20_02.rst | 8 +-
>>>> doc/guides/rel_notes/release_20_05.rst | 4 +-
>>>> doc/guides/rel_notes/release_20_08.rst | 6 +-
>>>> doc/guides/rel_notes/release_20_11.rst | 8 +-
>>>> doc/guides/rel_notes/release_21_02.rst | 10 +-
>>>> doc/guides/rel_notes/release_21_05.rst | 6 +-
>>>> doc/guides/rel_notes/release_21_11.rst | 2 +-
>>>
>>> Not sure about updating old release notes files; keeping 'octeontx2' can still
>>> make sense in the context of those releases.
>>
>> OK. I will send v2 keeping octeontx2 in the OLD release notes.
>>
>>
>
> Not related to this set specifically, but a more general question about updating
> old release notes.
> For me, release notes should be frozen with the release and not updated
> at all afterwards, but there is no agreement on this, and in practice old release
> notes are updated.
>
> My question is: is there any benefit to keeping a separate release notes file for
> each release and having to maintain the old ones?
> What about having a single release notes file, 'release.rst', and resetting it
> after each release?
>
I think there is a benefit to keeping them all - you can quickly
look/grep through the files for multiple releases, e.g. if you wanted to
check when a driver/feature was added. I agree it doesn't make sense
to update them unless there was a mistake at the time of release.
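The grep-based workflow described above can be sketched as a one-liner; the example below uses a throwaway directory with made-up release notes files (paths and contents hypothetical) rather than a real DPDK tree:

```shell
# Sketch: find the first release whose notes mention a driver, by grepping
# per-release files and taking the earliest match in release order.
mkdir -p /tmp/relnotes_demo
printf 'Added octeontx2 net driver.\n'   > /tmp/relnotes_demo/release_19_08.rst
printf 'Updated octeontx2 net driver.\n' > /tmp/relnotes_demo/release_20_11.rst
printf 'No mention here.\n'              > /tmp/relnotes_demo/release_21_11.rst
# -l lists only matching file names; sorting puts releases in order.
grep -l 'octeontx2' /tmp/relnotes_demo/release_*.rst | sort | head -n 1
```

With a single rolling `release.rst` this history lookup would require digging through git tags instead.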
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 0/4] regex/cn9k: use cnxk infrastructure
@ 2021-12-08 9:14 3% ` Jerin Jacob
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-12-08 9:14 UTC (permalink / raw)
To: Liron Himi; +Cc: Jerin Jacob, dpdk-dev
On Wed, Dec 8, 2021 at 12:02 AM <lironh@marvell.com> wrote:
>
> From: Liron Himi <lironh@marvell.com>
>
> The first 3 patches add support for REE to the cnxk infrastructure.
> The last patch changes the octeontx2 driver to use
> the new cnxk code. In addition, all references to
> octeontx2/otx2 were replaced with cn9k.
Series Acked-by: Jerin Jacob <jerinj@marvell.com>
There is still an issue with check-abi.sh[1]
[1]
http://mails.dpdk.org/archives/test-report/2021-December/247701.html
I will send a v5 with this fix and the octeontx2 driver removal patches as one series.
>
> v4:
> - squashed the 4th patch
>
> v3:
> - fix documentation issues
>
> v2:
> - fix review comments.
> - split original patch.
> - add the driver patch.
>
> Liron Himi (4):
> common/cnxk: add REE HW definitions
> common/cnxk: add REE mbox definitions
> common/cnxk: add REE support
> regex/cn9k: use cnxk infrastructure
>
> MAINTAINERS | 8 +-
> doc/guides/platform/cnxk.rst | 3 +
> doc/guides/platform/octeontx2.rst | 3 -
> .../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
> .../features/{octeontx2.ini => cn9k.ini} | 2 +-
> doc/guides/regexdevs/index.rst | 2 +-
> doc/guides/rel_notes/release_20_11.rst | 2 +-
> drivers/common/cnxk/hw/ree.h | 126 ++++
> drivers/common/cnxk/hw/rvu.h | 5 +
> drivers/common/cnxk/meson.build | 1 +
> drivers/common/cnxk/roc_api.h | 4 +
> drivers/common/cnxk/roc_constants.h | 2 +
> drivers/common/cnxk/roc_mbox.h | 100 +++
> drivers/common/cnxk/roc_platform.c | 1 +
> drivers/common/cnxk/roc_platform.h | 2 +
> drivers/common/cnxk/roc_priv.h | 3 +
> drivers/common/cnxk/roc_ree.c | 647 ++++++++++++++++++
> drivers/common/cnxk/roc_ree.h | 137 ++++
> drivers/common/cnxk/roc_ree_priv.h | 18 +
> drivers/common/cnxk/version.map | 18 +-
> .../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 +++++------
> drivers/regex/cn9k/cn9k_regexdev.h | 44 ++
> .../cn9k_regexdev_compiler.c} | 34 +-
> drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
> drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
> drivers/regex/{octeontx2 => cn9k}/version.map | 0
> drivers/regex/meson.build | 2 +-
> drivers/regex/octeontx2/otx2_regexdev.h | 109 ---
> .../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 167 -----
> .../regex/octeontx2/otx2_regexdev_hw_access.h | 202 ------
> drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 -----------
> drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 -
> 33 files changed, 1332 insertions(+), 1206 deletions(-)
> rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
> rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
> create mode 100644 drivers/common/cnxk/hw/ree.h
> create mode 100644 drivers/common/cnxk/roc_ree.c
> create mode 100644 drivers/common/cnxk/roc_ree.h
> create mode 100644 drivers/common/cnxk/roc_ree_priv.h
> rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
> create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
> rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
> create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
> rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
> rename drivers/regex/{octeontx2 => cn9k}/version.map (100%)
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
>
> --
> 2.28.0
>
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers
2021-12-08 9:14 3% ` Jerin Jacob
@ 2021-12-11 9:04 2% ` jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
1 sibling, 2 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev; +Cc: thomas, david.marchand, ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This patch series enables the following deprecation notice
-------------------------------------------------------------
In view of enabling a unified driver for octeontx2(cn9k)/
octeontx3(cn10k), remove the drivers/octeontx2 drivers and
replace them with drivers/cnxk/, which supports both octeontx2(cn9k)
and octeontx3(cn10k) SoCs.
This deprecation notice covers the following actions in the DPDK v22.02 release:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename drivers/regex/octeontx2/ as drivers/regex/cn9k/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
The last two actions align the naming convention with the cnxk scheme.
-----------------------------------------------------------------
v5:
- Fixed issues related to devtools/check-abi.sh
- Include http://patches.dpdk.org/project/dpdk/patch/20211206083542.3115019-1-jerinj@marvell.com/
patches in this series
- Removed the changes to old release notes from
http://patches.dpdk.org/project/dpdk/patch/20211206083542.3115019-1-jerinj@marvell.com/
v4:
- squashed the 4th patch
v3:
- fix documentation issues
v2:
- fix review comments.
- split original patch.
- add the driver patch.
Jerin Jacob (1):
drivers: remove octeontx2 drivers
Liron Himi (4):
common/cnxk: add REE HW definitions
common/cnxk: add REE mbox definitions
common/cnxk: add REE support
regex/cn9k: use cnxk infrastructure
MAINTAINERS | 45 +-
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 4 +
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 15 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 523 ---
.../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
.../features/{octeontx2.ini => cn9k.ini} | 2 +-
doc/guides/regexdevs/index.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 8 +-
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/cnxk/hw/ree.h | 126 +
drivers/common/cnxk/hw/rvu.h | 5 +
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_api.h | 4 +
drivers/common/cnxk/roc_constants.h | 2 +
drivers/common/cnxk/roc_mbox.h | 100 +
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_ree.c | 647 ++++
drivers/common/cnxk/roc_ree.h | 137 +
drivers/common/cnxk/roc_ree_priv.h | 18 +
drivers/common/cnxk/version.map | 18 +-
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
drivers/net/octeontx2/otx2_rss.c | 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
.../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 +-
drivers/regex/cn9k/cn9k_regexdev.h | 44 +
.../cn9k_regexdev_compiler.c} | 34 +-
drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
.../octeontx2 => regex/cn9k}/version.map | 0
drivers/regex/meson.build | 2 +-
drivers/regex/octeontx2/otx2_regexdev.h | 109 -
.../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
.../regex/octeontx2/otx2_regexdev_hw_access.c | 167 -
.../regex/octeontx2/otx2_regexdev_hw_access.h | 202 -
drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 --
drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 -
drivers/regex/octeontx2/version.map | 3 -
usertools/dpdk-devbind.py | 12 +-
179 files changed, 1427 insertions(+), 53329 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
create mode 100644 drivers/common/cnxk/hw/ree.h
create mode 100644 drivers/common/cnxk/roc_ree.c
create mode 100644 drivers/common/cnxk/roc_ree.h
create mode 100644 drivers/common/cnxk/roc_ree_priv.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
rename drivers/{event/octeontx2 => regex/cn9k}/version.map (100%)
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
delete mode 100644 drivers/regex/octeontx2/version.map
--
2.34.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
@ 2021-12-11 9:04 2% ` jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
1 sibling, 0 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev, Thomas Monjalon, Ray Kinsella, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jerin Jacob,
Liron Himi
Cc: david.marchand, ferruh.yigit
From: Liron Himi <lironh@marvell.com>
Update the driver to use the REE cnxk code.
Replace octeontx2/otx2 with cn9k.
Signed-off-by: Liron Himi <lironh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 8 +-
devtools/check-abi.sh | 4 +
doc/guides/platform/cnxk.rst | 3 +
doc/guides/platform/octeontx2.rst | 3 -
.../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
.../features/{octeontx2.ini => cn9k.ini} | 2 +-
doc/guides/regexdevs/index.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 2 +-
.../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 ++++++++----------
drivers/regex/cn9k/cn9k_regexdev.h | 44 ++
.../cn9k_regexdev_compiler.c} | 34 +-
drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
drivers/regex/{octeontx2 => cn9k}/version.map | 0
drivers/regex/meson.build | 2 +-
drivers/regex/octeontx2/otx2_regexdev.h | 109 -----
.../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
.../regex/octeontx2/otx2_regexdev_hw_access.c | 167 --------
.../regex/octeontx2/otx2_regexdev_hw_access.h | 202 ---------
drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 -----------------
drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 --
21 files changed, 273 insertions(+), 1205 deletions(-)
rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
rename drivers/regex/{octeontx2 => cn9k}/version.map (100%)
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 18d9edaf88..854b81f2a3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1236,11 +1236,11 @@ F: doc/guides/dmadevs/dpaa.rst
RegEx Drivers
-------------
-Marvell OCTEON TX2 regex
+Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
-F: drivers/regex/octeontx2/
-F: doc/guides/regexdevs/octeontx2.rst
-F: doc/guides/regexdevs/features/octeontx2.ini
+F: drivers/regex/cn9k/
+F: doc/guides/regexdevs/cn9k.rst
+F: doc/guides/regexdevs/features/cn9k.ini
Mellanox mlx5
M: Ori Kam <orika@nvidia.com>
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ca523eb94c..5e654189a8 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,6 +48,10 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
+ if grep -qE "\<librte_regex_octeontx2" $dump; then
+ echo "Skipped removed driver $name."
+ continue
+ fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
echo "Error: cannot find $name in $newdir" >&2
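The check-abi.sh hunk above skips any ABI dump that mentions the removed library, matching on a word boundary so that names merely containing the string as a prefix still compare. A standalone sketch of that test (the dump file and its contents are hypothetical):

```shell
# Sketch of the skip logic added to devtools/check-abi.sh above: a dump
# whose contents reference the removed library is skipped, not compared.
dump=/tmp/librte_regex_octeontx2.dump
printf "<abi-corpus path='librte_regex_octeontx2.so.22'/>\n" > "$dump"
# \< is GNU grep's word-boundary anchor, as used in the patch.
if grep -qE "\<librte_regex_octeontx2" "$dump"; then
    echo "Skipped removed driver $(basename "$dump")."
fi
```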
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 88995cc70c..5213df3ccd 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -156,6 +156,9 @@ This section lists dataplane H/W block(s) available in cnxk SoC.
#. **Dmadev Driver**
See :doc:`../dmadevs/cnxk` for DPI Dmadev driver information.
+#. **Regex Device Driver**
+ See :doc:`../regexdevs/cn9k` for REE Regex device driver information.
+
Procedure to Setup Platform
---------------------------
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index 3a3d28571c..5ab43abbdd 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -155,9 +155,6 @@ This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
#. **Crypto Device Driver**
See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-#. **Regex Device Driver**
- See :doc:`../regexdevs/octeontx2` for REE regex device driver information.
-
Procedure to Setup Platform
---------------------------
diff --git a/doc/guides/regexdevs/octeontx2.rst b/doc/guides/regexdevs/cn9k.rst
similarity index 69%
rename from doc/guides/regexdevs/octeontx2.rst
rename to doc/guides/regexdevs/cn9k.rst
index b39d457d60..c23c295b93 100644
--- a/doc/guides/regexdevs/octeontx2.rst
+++ b/doc/guides/regexdevs/cn9k.rst
@@ -1,20 +1,20 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2020 Marvell International Ltd.
-OCTEON TX2 REE Regexdev Driver
+CN9K REE Regexdev Driver
==============================
-The OCTEON TX2 REE PMD (**librte_regex_octeontx2**) provides poll mode
-regexdev driver support for the inbuilt regex device found in the **Marvell OCTEON TX2**
+The CN9K REE PMD (**librte_regex_cn9k**) provides poll mode
+regexdev driver support for the inbuilt regex device found in the **Marvell CN9K**
SoC family.
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
+More information about CN9K SoC can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
Features
--------
-Features of the OCTEON TX2 REE PMD are:
+Features of the CN9K REE PMD are:
- 36 queues
- Up to 254 matches for each regex operation
@@ -22,12 +22,12 @@ Features of the OCTEON TX2 REE PMD are:
Prerequisites and Compilation procedure
---------------------------------------
- See :doc:`../platform/octeontx2` for setup information.
+ See :doc:`../platform/cnxk` for setup information.
Device Setup
------------
-The OCTEON TX2 REE devices will need to be bound to a user-space IO driver
+The CN9K REE devices will need to be bound to a user-space IO driver
for use. The script ``dpdk-devbind.py`` script included with DPDK can be
used to view the state of the devices and to bind them to a suitable
DPDK-supported kernel driver. When querying the status of the devices,
@@ -38,12 +38,12 @@ those devices alone.
Debugging Options
-----------------
-.. _table_octeontx2_regex_debug_options:
+.. _table_cn9k_regex_debug_options:
-.. table:: OCTEON TX2 regex device debug options
+.. table:: CN9K regex device debug options
+---+------------+-------------------------------------------------------+
| # | Component | EAL log command |
+===+============+=======================================================+
- | 1 | REE | --log-level='pmd\.regex\.octeontx2,8' |
+ | 1 | REE | --log-level='pmd\.regex\.cn9k,8' |
+---+------------+-------------------------------------------------------+
diff --git a/doc/guides/regexdevs/features/octeontx2.ini b/doc/guides/regexdevs/features/cn9k.ini
similarity index 80%
rename from doc/guides/regexdevs/features/octeontx2.ini
rename to doc/guides/regexdevs/features/cn9k.ini
index c9b421a16d..b029af8ac2 100644
--- a/doc/guides/regexdevs/features/octeontx2.ini
+++ b/doc/guides/regexdevs/features/cn9k.ini
@@ -1,5 +1,5 @@
;
-; Supported features of the 'octeontx2' regex driver.
+; Supported features of the 'cn9k' regex driver.
;
; Refer to default.ini for the full list of available driver features.
;
diff --git a/doc/guides/regexdevs/index.rst b/doc/guides/regexdevs/index.rst
index b1abc826bd..11a33fc09e 100644
--- a/doc/guides/regexdevs/index.rst
+++ b/doc/guides/regexdevs/index.rst
@@ -13,4 +13,4 @@ which can be used from an application through RegEx API.
features_overview
mlx5
- octeontx2
+ cn9k
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index af7ce90ba3..7fd15398e4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -290,7 +290,7 @@ New Features
Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
- See the :doc:`../regexdevs/octeontx2` for more details.
+ See ``regexdevs/octeontx2`` for more details.
* **Updated Software Eventdev driver.**
diff --git a/drivers/regex/octeontx2/otx2_regexdev.c b/drivers/regex/cn9k/cn9k_regexdev.c
similarity index 61%
rename from drivers/regex/octeontx2/otx2_regexdev.c
rename to drivers/regex/cn9k/cn9k_regexdev.c
index b6e55853e9..32d20c1be8 100644
--- a/drivers/regex/octeontx2/otx2_regexdev.c
+++ b/drivers/regex/cn9k/cn9k_regexdev.c
@@ -13,12 +13,8 @@
/* REE common headers */
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev.h"
-#include "otx2_regexdev_compiler.h"
-#include "otx2_regexdev_hw_access.h"
-#include "otx2_regexdev_mbox.h"
+#include "cn9k_regexdev.h"
+#include "cn9k_regexdev_compiler.h"
/* HW matches are at offset 0x80 from RES_PTR_ADDR
@@ -35,9 +31,6 @@
#define REE_MAX_RULES_PER_GROUP 0xFFFF
#define REE_MAX_GROUPS 0xFFFF
-/* This is temporarily here */
-#define REE0_PF 19
-#define REE1_PF 20
#define REE_RULE_DB_VERSION 2
#define REE_RULE_DB_REVISION 0
@@ -58,32 +51,32 @@ struct ree_rule_db {
static void
qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
{
- snprintf(name, size, "otx2_ree_lf_mem_%u:%u", dev_id, qp_id);
+ snprintf(name, size, "cn9k_ree_lf_mem_%u:%u", dev_id, qp_id);
}
-static struct otx2_ree_qp *
+static struct roc_ree_qp *
ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- struct otx2_ree_vf *vf = &data->vf;
+ struct roc_ree_vf *vf = &data->vf;
const struct rte_memzone *lf_mem;
uint32_t len, iq_len, size_div2;
char name[RTE_MEMZONE_NAMESIZE];
uint64_t used_len, iova;
- struct otx2_ree_qp *qp;
+ struct roc_ree_qp *qp;
uint8_t *va;
int ret;
/* Allocate queue pair */
- qp = rte_zmalloc("OCTEON TX2 Regex PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN);
+ qp = rte_zmalloc("CN9K Regex PMD Queue Pair", sizeof(*qp),
+ ROC_ALIGN);
if (qp == NULL) {
- otx2_err("Could not allocate queue pair");
+ cn9k_err("Could not allocate queue pair");
return NULL;
}
- iq_len = OTX2_REE_IQ_LEN;
+ iq_len = REE_IQ_LEN;
/*
* Queue size must be in units of 128B 2 * REE_INST_S (which is 64B),
@@ -93,13 +86,13 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
size_div2 = iq_len >> 1;
/* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(struct otx2_ree_rid), 8);
+ len = iq_len * RTE_ALIGN(sizeof(struct roc_ree_rid), 8);
/* So that instruction queues start as pg size aligned */
len = RTE_ALIGN(len, pg_sz);
/* For instruction queues */
- len += OTX2_REE_IQ_LEN * sizeof(union otx2_ree_inst);
+ len += REE_IQ_LEN * sizeof(union roc_ree_inst);
/* Waste after instruction queues */
len = RTE_ALIGN(len, pg_sz);
@@ -107,11 +100,11 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
qp_id);
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
+ lf_mem = rte_memzone_reserve_aligned(name, len, rte_socket_id(),
RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
RTE_CACHE_LINE_SIZE);
if (lf_mem == NULL) {
- otx2_err("Could not allocate reserved memzone");
+ cn9k_err("Could not allocate reserved memzone");
goto qp_free;
}
@@ -121,24 +114,24 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
memset(va, 0, len);
/* Initialize pending queue */
- qp->pend_q.rid_queue = (struct otx2_ree_rid *)va;
+ qp->pend_q.rid_queue = (struct roc_ree_rid *)va;
qp->pend_q.enq_tail = 0;
qp->pend_q.deq_head = 0;
qp->pend_q.pending_count = 0;
- used_len = iq_len * RTE_ALIGN(sizeof(struct otx2_ree_rid), 8);
+ used_len = iq_len * RTE_ALIGN(sizeof(struct roc_ree_rid), 8);
used_len = RTE_ALIGN(used_len, pg_sz);
iova += used_len;
qp->iq_dma_addr = iova;
qp->id = qp_id;
- qp->base = OTX2_REE_LF_BAR2(vf, qp_id);
- qp->otx2_regexdev_jobid = 0;
+ qp->base = roc_ree_qp_get_base(vf, qp_id);
+ qp->roc_regexdev_jobid = 0;
qp->write_offset = 0;
- ret = otx2_ree_iq_enable(dev, qp, OTX2_REE_QUEUE_HI_PRIO, size_div2);
+ ret = roc_ree_iq_enable(vf, qp, REE_QUEUE_HI_PRIO, size_div2);
if (ret) {
- otx2_err("Could not enable instruction queue");
+ cn9k_err("Could not enable instruction queue");
goto qp_free;
}
@@ -150,13 +143,13 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
}
static int
-ree_qp_destroy(const struct rte_regexdev *dev, struct otx2_ree_qp *qp)
+ree_qp_destroy(const struct rte_regexdev *dev, struct roc_ree_qp *qp)
{
const struct rte_memzone *lf_mem;
char name[RTE_MEMZONE_NAMESIZE];
int ret;
- otx2_ree_iq_disable(qp);
+ roc_ree_iq_disable(qp);
qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
qp->id);
@@ -175,8 +168,8 @@ ree_qp_destroy(const struct rte_regexdev *dev, struct otx2_ree_qp *qp)
static int
ree_queue_pair_release(struct rte_regexdev *dev, uint16_t qp_id)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
int ret;
ree_func_trace("Queue=%d", qp_id);
@@ -186,7 +179,7 @@ ree_queue_pair_release(struct rte_regexdev *dev, uint16_t qp_id)
ret = ree_qp_destroy(dev, qp);
if (ret) {
- otx2_err("Could not destroy queue pair %d", qp_id);
+ cn9k_err("Could not destroy queue pair %d", qp_id);
return ret;
}
@@ -200,12 +193,12 @@ ree_dev_register(const char *name)
{
struct rte_regexdev *dev;
- otx2_ree_dbg("Creating regexdev %s\n", name);
+ cn9k_ree_dbg("Creating regexdev %s\n", name);
/* allocate device structure */
dev = rte_regexdev_register(name);
if (dev == NULL) {
- otx2_err("Failed to allocate regex device for %s", name);
+ cn9k_err("Failed to allocate regex device for %s", name);
return NULL;
}
@@ -213,12 +206,12 @@ ree_dev_register(const char *name)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
dev->data->dev_private =
rte_zmalloc_socket("regexdev device private",
- sizeof(struct otx2_ree_data),
+ sizeof(struct cn9k_ree_data),
RTE_CACHE_LINE_SIZE,
rte_socket_id());
if (dev->data->dev_private == NULL) {
- otx2_err("Cannot allocate memory for dev %s private data",
+ cn9k_err("Cannot allocate memory for dev %s private data",
name);
rte_regexdev_unregister(dev);
@@ -232,7 +225,7 @@ ree_dev_register(const char *name)
static int
ree_dev_unregister(struct rte_regexdev *dev)
{
- otx2_ree_dbg("Closing regex device %s", dev->device->name);
+ cn9k_ree_dbg("Closing regex device %s", dev->device->name);
/* free regex device */
rte_regexdev_unregister(dev);
@@ -246,8 +239,8 @@ ree_dev_unregister(struct rte_regexdev *dev)
static int
ree_dev_fini(struct rte_regexdev *dev)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct rte_pci_device *pci_dev;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
int i, ret;
ree_func_trace();
@@ -258,9 +251,9 @@ ree_dev_fini(struct rte_regexdev *dev)
return ret;
}
- ret = otx2_ree_queues_detach(dev);
+ ret = roc_ree_queues_detach(vf);
if (ret)
- otx2_err("Could not detach queues");
+ cn9k_err("Could not detach queues");
/* TEMP : should be in lib */
if (data->queue_pairs)
@@ -268,33 +261,32 @@ ree_dev_fini(struct rte_regexdev *dev)
if (data->rules)
rte_free(data->rules);
- pci_dev = container_of(dev->device, struct rte_pci_device, device);
- otx2_dev_fini(pci_dev, &(data->vf.otx2_dev));
+ roc_ree_dev_fini(vf);
ret = ree_dev_unregister(dev);
if (ret)
- otx2_err("Could not destroy PMD");
+ cn9k_err("Could not destroy PMD");
return ret;
}
static inline int
-ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
- struct otx2_ree_pending_queue *pend_q)
+ree_enqueue(struct roc_ree_qp *qp, struct rte_regex_ops *op,
+ struct roc_ree_pending_queue *pend_q)
{
- union otx2_ree_inst inst;
- union otx2_ree_res *res;
+ union roc_ree_inst inst;
+ union ree_res *res;
uint32_t offset;
- if (unlikely(pend_q->pending_count >= OTX2_REE_DEFAULT_CMD_QLEN)) {
- otx2_err("Pending count %" PRIu64 " is greater than Q size %d",
- pend_q->pending_count, OTX2_REE_DEFAULT_CMD_QLEN);
+ if (unlikely(pend_q->pending_count >= REE_DEFAULT_CMD_QLEN)) {
+ cn9k_err("Pending count %" PRIu64 " is greater than Q size %d",
+ pend_q->pending_count, REE_DEFAULT_CMD_QLEN);
return -EAGAIN;
}
- if (unlikely(op->mbuf->data_len > OTX2_REE_MAX_PAYLOAD_SIZE ||
+ if (unlikely(op->mbuf->data_len > REE_MAX_PAYLOAD_SIZE ||
op->mbuf->data_len == 0)) {
- otx2_err("Packet length %d is greater than MAX payload %d",
- op->mbuf->data_len, OTX2_REE_MAX_PAYLOAD_SIZE);
+ cn9k_err("Packet length %d is greater than MAX payload %d",
+ op->mbuf->data_len, REE_MAX_PAYLOAD_SIZE);
return -EAGAIN;
}
@@ -324,7 +316,7 @@ ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
inst.cn98xx.ree_job_ctrl = (0x1 << 8);
else
inst.cn98xx.ree_job_ctrl = 0;
- inst.cn98xx.ree_job_id = qp->otx2_regexdev_jobid;
+ inst.cn98xx.ree_job_id = qp->roc_regexdev_jobid;
/* W 7 */
inst.cn98xx.ree_job_subset_id_0 = op->group_id0;
if (op->req_flags & RTE_REGEX_OPS_REQ_GROUP_ID1_VALID_F)
@@ -348,33 +340,33 @@ ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
pend_q->rid_queue[pend_q->enq_tail].user_id = op->user_id;
/* Mark result as not done */
- res = (union otx2_ree_res *)(op);
+ res = (union ree_res *)(op);
res->s.done = 0;
res->s.ree_err = 0;
/* We will use soft queue length here to limit requests */
- REE_MOD_INC(pend_q->enq_tail, OTX2_REE_DEFAULT_CMD_QLEN);
+ REE_MOD_INC(pend_q->enq_tail, REE_DEFAULT_CMD_QLEN);
pend_q->pending_count += 1;
- REE_MOD_INC(qp->otx2_regexdev_jobid, 0xFFFFFF);
- REE_MOD_INC(qp->write_offset, OTX2_REE_IQ_LEN);
+ REE_MOD_INC(qp->roc_regexdev_jobid, 0xFFFFFF);
+ REE_MOD_INC(qp->write_offset, REE_IQ_LEN);
return 0;
}
static uint16_t
-otx2_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
struct rte_regex_ops **ops, uint16_t nb_ops)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
- struct otx2_ree_pending_queue *pend_q;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
+ struct roc_ree_pending_queue *pend_q;
uint16_t nb_allowed, count = 0;
struct rte_regex_ops *op;
int ret;
pend_q = &qp->pend_q;
- nb_allowed = OTX2_REE_DEFAULT_CMD_QLEN - pend_q->pending_count;
+ nb_allowed = REE_DEFAULT_CMD_QLEN - pend_q->pending_count;
if (nb_ops > nb_allowed)
nb_ops = nb_allowed;
@@ -392,7 +384,7 @@ otx2_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
rte_io_wmb();
/* Update Doorbell */
- otx2_write64(count, qp->base + OTX2_REE_LF_DOORBELL);
+ plt_write64(count, qp->base + REE_LF_DOORBELL);
return count;
}
@@ -422,15 +414,15 @@ ree_dequeue_post_process(struct rte_regex_ops *ops)
}
if (unlikely(ree_res_status != REE_TYPE_RESULT_DESC)) {
- if (ree_res_status & OTX2_REE_STATUS_PMI_SOJ_BIT)
+ if (ree_res_status & REE_STATUS_PMI_SOJ_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_PMI_SOJ_F;
- if (ree_res_status & OTX2_REE_STATUS_PMI_EOJ_BIT)
+ if (ree_res_status & REE_STATUS_PMI_EOJ_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_PMI_EOJ_F;
- if (ree_res_status & OTX2_REE_STATUS_ML_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_ML_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_SCAN_TIMEOUT_F;
- if (ree_res_status & OTX2_REE_STATUS_MM_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_MM_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_MATCH_F;
- if (ree_res_status & OTX2_REE_STATUS_MP_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_MP_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_PREFIX_F;
}
if (ops->nb_matches > 0) {
@@ -439,22 +431,22 @@ ree_dequeue_post_process(struct rte_regex_ops *ops)
ops->nb_matches : REE_NUM_MATCHES_ALIGN);
match = (uint64_t)ops + REE_MATCH_OFFSET;
match += (ops->nb_matches - off) *
- sizeof(union otx2_ree_match);
+ sizeof(union ree_match);
memcpy((void *)ops->matches, (void *)match,
- off * sizeof(union otx2_ree_match));
+ off * sizeof(union ree_match));
}
}
static uint16_t
-otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
struct rte_regex_ops **ops, uint16_t nb_ops)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
- struct otx2_ree_pending_queue *pend_q;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
+ struct roc_ree_pending_queue *pend_q;
int i, nb_pending, nb_completed = 0;
volatile struct ree_res_s_98 *res;
- struct otx2_ree_rid *rid;
+ struct roc_ree_rid *rid;
pend_q = &qp->pend_q;
@@ -474,7 +466,7 @@ otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
ops[i] = (struct rte_regex_ops *)(rid->rid);
ops[i]->user_id = rid->user_id;
- REE_MOD_INC(pend_q->deq_head, OTX2_REE_DEFAULT_CMD_QLEN);
+ REE_MOD_INC(pend_q->deq_head, REE_DEFAULT_CMD_QLEN);
pend_q->pending_count -= 1;
}
@@ -487,10 +479,10 @@ otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
}
static int
-otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
+cn9k_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
ree_func_trace();
@@ -502,7 +494,7 @@ otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
info->max_queue_pairs = vf->max_queues;
info->max_matches = vf->max_matches;
- info->max_payload_size = OTX2_REE_MAX_PAYLOAD_SIZE;
+ info->max_payload_size = REE_MAX_PAYLOAD_SIZE;
info->max_rules_per_group = data->max_rules_per_group;
info->max_groups = data->max_groups;
info->regexdev_capa = data->regexdev_capa;
@@ -512,11 +504,11 @@ otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
}
static int
-otx2_ree_dev_config(struct rte_regexdev *dev,
+cn9k_ree_dev_config(struct rte_regexdev *dev,
const struct rte_regexdev_config *cfg)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
const struct ree_rule_db *rule_db;
uint32_t rule_db_len;
int ret;
@@ -524,29 +516,29 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
ree_func_trace();
if (cfg->nb_queue_pairs > vf->max_queues) {
- otx2_err("Invalid number of queue pairs requested");
+ cn9k_err("Invalid number of queue pairs requested");
return -EINVAL;
}
if (cfg->nb_max_matches != vf->max_matches) {
- otx2_err("Invalid number of max matches requested");
+ cn9k_err("Invalid number of max matches requested");
return -EINVAL;
}
if (cfg->dev_cfg_flags != 0) {
- otx2_err("Invalid device configuration flags requested");
+ cn9k_err("Invalid device configuration flags requested");
return -EINVAL;
}
/* Unregister error interrupts */
if (vf->err_intr_registered)
- otx2_ree_err_intr_unregister(dev);
+ roc_ree_err_intr_unregister(vf);
/* Detach queues */
if (vf->nb_queues) {
- ret = otx2_ree_queues_detach(dev);
+ ret = roc_ree_queues_detach(vf);
if (ret) {
- otx2_err("Could not detach REE queues");
+ cn9k_err("Could not detach REE queues");
return ret;
}
}
@@ -559,7 +551,7 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
if (data->queue_pairs == NULL) {
data->nb_queue_pairs = 0;
- otx2_err("Failed to get memory for qp meta data, nb_queues %u",
+ cn9k_err("Failed to get memory for qp meta data, nb_queues %u",
cfg->nb_queue_pairs);
return -ENOMEM;
}
@@ -579,7 +571,7 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
qp = rte_realloc(qp, sizeof(qp[0]) * cfg->nb_queue_pairs,
RTE_CACHE_LINE_SIZE);
if (qp == NULL) {
- otx2_err("Failed to realloc qp meta data, nb_queues %u",
+ cn9k_err("Failed to realloc qp meta data, nb_queues %u",
cfg->nb_queue_pairs);
return -ENOMEM;
}
@@ -594,52 +586,52 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
data->nb_queue_pairs = cfg->nb_queue_pairs;
/* Attach queues */
- otx2_ree_dbg("Attach %d queues", cfg->nb_queue_pairs);
- ret = otx2_ree_queues_attach(dev, cfg->nb_queue_pairs);
+ cn9k_ree_dbg("Attach %d queues", cfg->nb_queue_pairs);
+ ret = roc_ree_queues_attach(vf, cfg->nb_queue_pairs);
if (ret) {
- otx2_err("Could not attach queues");
+ cn9k_err("Could not attach queues");
return -ENODEV;
}
- ret = otx2_ree_msix_offsets_get(dev);
+ ret = roc_ree_msix_offsets_get(vf);
if (ret) {
- otx2_err("Could not get MSI-X offsets");
+ cn9k_err("Could not get MSI-X offsets");
goto queues_detach;
}
if (cfg->rule_db && cfg->rule_db_len) {
- otx2_ree_dbg("rule_db length %d", cfg->rule_db_len);
+ cn9k_ree_dbg("rule_db length %d", cfg->rule_db_len);
rule_db = (const struct ree_rule_db *)cfg->rule_db;
rule_db_len = rule_db->number_of_entries *
sizeof(struct ree_rule_db_entry);
- otx2_ree_dbg("rule_db number of entries %d",
+ cn9k_ree_dbg("rule_db number of entries %d",
rule_db->number_of_entries);
if (rule_db_len > cfg->rule_db_len) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
ret = -EINVAL;
goto queues_detach;
}
- ret = otx2_ree_rule_db_prog(dev, (const char *)rule_db->entries,
- rule_db_len, NULL, OTX2_REE_NON_INC_PROG);
+ ret = roc_ree_rule_db_prog(vf, (const char *)rule_db->entries,
+ rule_db_len, NULL, REE_NON_INC_PROG);
if (ret) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
goto queues_detach;
}
}
- dev->enqueue = otx2_ree_enqueue_burst;
- dev->dequeue = otx2_ree_dequeue_burst;
+ dev->enqueue = cn9k_ree_enqueue_burst;
+ dev->dequeue = cn9k_ree_dequeue_burst;
rte_mb();
return 0;
queues_detach:
- otx2_ree_queues_detach(dev);
+ roc_ree_queues_detach(vf);
return ret;
}
static int
-otx2_ree_stop(struct rte_regexdev *dev)
+cn9k_ree_stop(struct rte_regexdev *dev)
{
RTE_SET_USED(dev);
@@ -648,18 +640,20 @@ otx2_ree_stop(struct rte_regexdev *dev)
}
static int
-otx2_ree_start(struct rte_regexdev *dev)
+cn9k_ree_start(struct rte_regexdev *dev)
{
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
uint32_t rule_db_len = 0;
int ret;
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, NULL);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, NULL);
if (ret)
return ret;
if (rule_db_len == 0) {
- otx2_err("Rule db not programmed");
+ cn9k_err("Rule db not programmed");
return -EFAULT;
}
@@ -667,56 +661,55 @@ otx2_ree_start(struct rte_regexdev *dev)
}
static int
-otx2_ree_close(struct rte_regexdev *dev)
+cn9k_ree_close(struct rte_regexdev *dev)
{
return ree_dev_fini(dev);
}
static int
-otx2_ree_queue_pair_setup(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_queue_pair_setup(struct rte_regexdev *dev, uint16_t qp_id,
const struct rte_regexdev_qp_conf *qp_conf)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp;
ree_func_trace("Queue=%d", qp_id);
if (data->queue_pairs[qp_id] != NULL)
ree_queue_pair_release(dev, qp_id);
- if (qp_conf->nb_desc > OTX2_REE_DEFAULT_CMD_QLEN) {
- otx2_err("Could not setup queue pair for %u descriptors",
+ if (qp_conf->nb_desc > REE_DEFAULT_CMD_QLEN) {
+ cn9k_err("Could not setup queue pair for %u descriptors",
qp_conf->nb_desc);
return -EINVAL;
}
if (qp_conf->qp_conf_flags != 0) {
- otx2_err("Could not setup queue pair with configuration flags 0x%x",
+ cn9k_err("Could not setup queue pair with configuration flags 0x%x",
qp_conf->qp_conf_flags);
return -EINVAL;
}
qp = ree_qp_create(dev, qp_id);
if (qp == NULL) {
- otx2_err("Could not create queue pair %d", qp_id);
+ cn9k_err("Could not create queue pair %d", qp_id);
return -ENOMEM;
}
- qp->cb = qp_conf->cb;
data->queue_pairs[qp_id] = qp;
return 0;
}
static int
-otx2_ree_rule_db_compile_activate(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_activate(struct rte_regexdev *dev)
{
- return otx2_ree_rule_db_compile_prog(dev);
+ return cn9k_ree_rule_db_compile_prog(dev);
}
static int
-otx2_ree_rule_db_update(struct rte_regexdev *dev,
+cn9k_ree_rule_db_update(struct rte_regexdev *dev,
const struct rte_regexdev_rule *rules, uint16_t nb_rules)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
struct rte_regexdev_rule *old_ptr;
uint32_t i, sum_nb_rules;
@@ -770,10 +763,11 @@ otx2_ree_rule_db_update(struct rte_regexdev *dev,
}
static int
-otx2_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
+cn9k_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
uint32_t rule_db_len)
{
-
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
const struct ree_rule_db *ree_rule_db;
uint32_t ree_rule_db_len;
int ret;
@@ -784,21 +778,23 @@ otx2_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
ree_rule_db_len = ree_rule_db->number_of_entries *
sizeof(struct ree_rule_db_entry);
if (ree_rule_db_len > rule_db_len) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
return -EINVAL;
}
- ret = otx2_ree_rule_db_prog(dev, (const char *)ree_rule_db->entries,
- ree_rule_db_len, NULL, OTX2_REE_NON_INC_PROG);
+ ret = roc_ree_rule_db_prog(vf, (const char *)ree_rule_db->entries,
+ ree_rule_db_len, NULL, REE_NON_INC_PROG);
if (ret) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
return -ENOSPC;
}
return 0;
}
static int
-otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
+cn9k_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
{
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
struct ree_rule_db *ree_rule_db;
uint32_t rule_dbi_len;
uint32_t rule_db_len;
@@ -806,7 +802,7 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, &rule_dbi_len);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, &rule_dbi_len);
if (ret)
return ret;
@@ -816,10 +812,10 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
}
ree_rule_db = (struct ree_rule_db *)rule_db;
- ret = otx2_ree_rule_db_get(dev, (char *)ree_rule_db->entries,
+ ret = roc_ree_rule_db_get(vf, (char *)ree_rule_db->entries,
rule_db_len, NULL, 0);
if (ret) {
- otx2_err("Could not export rule db");
+ cn9k_err("Could not export rule db");
return -EFAULT;
}
ree_rule_db->number_of_entries =
@@ -830,55 +826,44 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
return 0;
}
-static int
-ree_get_blkaddr(struct otx2_dev *dev)
-{
- int pf;
-
- pf = otx2_get_pf(dev->pf_func);
- if (pf == REE0_PF)
- return RVU_BLOCK_ADDR_REE0;
- else if (pf == REE1_PF)
- return RVU_BLOCK_ADDR_REE1;
- else
- return 0;
-}
-
-static struct rte_regexdev_ops otx2_ree_ops = {
- .dev_info_get = otx2_ree_dev_info_get,
- .dev_configure = otx2_ree_dev_config,
- .dev_qp_setup = otx2_ree_queue_pair_setup,
- .dev_start = otx2_ree_start,
- .dev_stop = otx2_ree_stop,
- .dev_close = otx2_ree_close,
- .dev_attr_get = NULL,
- .dev_attr_set = NULL,
- .dev_rule_db_update = otx2_ree_rule_db_update,
- .dev_rule_db_compile_activate =
- otx2_ree_rule_db_compile_activate,
- .dev_db_import = otx2_ree_rule_db_import,
- .dev_db_export = otx2_ree_rule_db_export,
- .dev_xstats_names_get = NULL,
- .dev_xstats_get = NULL,
- .dev_xstats_by_name_get = NULL,
- .dev_xstats_reset = NULL,
- .dev_selftest = NULL,
- .dev_dump = NULL,
+static struct rte_regexdev_ops cn9k_ree_ops = {
+ .dev_info_get = cn9k_ree_dev_info_get,
+ .dev_configure = cn9k_ree_dev_config,
+ .dev_qp_setup = cn9k_ree_queue_pair_setup,
+ .dev_start = cn9k_ree_start,
+ .dev_stop = cn9k_ree_stop,
+ .dev_close = cn9k_ree_close,
+ .dev_attr_get = NULL,
+ .dev_attr_set = NULL,
+ .dev_rule_db_update = cn9k_ree_rule_db_update,
+ .dev_rule_db_compile_activate =
+ cn9k_ree_rule_db_compile_activate,
+ .dev_db_import = cn9k_ree_rule_db_import,
+ .dev_db_export = cn9k_ree_rule_db_export,
+ .dev_xstats_names_get = NULL,
+ .dev_xstats_get = NULL,
+ .dev_xstats_by_name_get = NULL,
+ .dev_xstats_reset = NULL,
+ .dev_selftest = NULL,
+ .dev_dump = NULL,
};
static int
-otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+cn9k_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
char name[RTE_REGEXDEV_NAME_MAX_LEN];
- struct otx2_ree_data *data;
- struct otx2_dev *otx2_dev;
+ struct cn9k_ree_data *data;
struct rte_regexdev *dev;
- uint8_t max_matches = 0;
- struct otx2_ree_vf *vf;
- uint16_t nb_queues = 0;
+ struct roc_ree_vf *vf;
int ret;
+ ret = roc_plt_init();
+ if (ret < 0) {
+ plt_err("Failed to initialize platform model");
+ return ret;
+ }
+
rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
dev = ree_dev_register(name);
@@ -887,63 +872,19 @@ otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
goto exit;
}
- dev->dev_ops = &otx2_ree_ops;
+ dev->dev_ops = &cn9k_ree_ops;
dev->device = &pci_dev->device;
/* Get private data space allocated */
data = dev->data->dev_private;
vf = &data->vf;
-
- otx2_dev = &vf->otx2_dev;
-
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
+ vf->pci_dev = pci_dev;
+ ret = roc_ree_dev_init(vf);
if (ret) {
- otx2_err("Could not initialize otx2_dev");
+ plt_err("Failed to initialize roc ree rc=%d", ret);
goto dev_unregister;
}
- /* Get REE block address */
- vf->block_address = ree_get_blkaddr(otx2_dev);
- if (!vf->block_address) {
- otx2_err("Could not determine block PF number");
- goto otx2_dev_fini;
- }
- /* Get number of queues available on the device */
- ret = otx2_ree_available_queues_get(dev, &nb_queues);
- if (ret) {
- otx2_err("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_REE_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- otx2_err("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- otx2_ree_dbg("Max queues supported by device: %d", vf->max_queues);
-
- /* Get number of maximum matches supported on the device */
- ret = otx2_ree_max_matches_get(dev, &max_matches);
- if (ret) {
- otx2_err("Could not determine the maximum matches supported");
- goto otx2_dev_fini;
- }
- /* Don't exceed the limits set per VF */
- max_matches = RTE_MIN(max_matches, OTX2_REE_MAX_MATCHES_PER_VF);
- if (max_matches == 0) {
- otx2_err("Could not determine the maximum matches supported");
- goto otx2_dev_fini;
- }
-
- vf->max_matches = max_matches;
-
- otx2_ree_dbg("Max matches supported by device: %d", vf->max_matches);
data->rule_flags = RTE_REGEX_PCRE_RULE_ALLOW_EMPTY_F |
RTE_REGEX_PCRE_RULE_ANCHORED_F;
data->regexdev_capa = 0;
@@ -954,18 +895,16 @@ otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
dev->state = RTE_REGEXDEV_READY;
return 0;
-otx2_dev_fini:
- otx2_dev_fini(pci_dev, otx2_dev);
dev_unregister:
ree_dev_unregister(dev);
exit:
- otx2_err("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
+ cn9k_err("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
pci_dev->id.vendor_id, pci_dev->id.device_id);
return ret;
}
static int
-otx2_ree_pci_remove(struct rte_pci_device *pci_dev)
+cn9k_ree_pci_remove(struct rte_pci_device *pci_dev)
{
char name[RTE_REGEXDEV_NAME_MAX_LEN];
struct rte_regexdev *dev = NULL;
@@ -986,20 +925,20 @@ otx2_ree_pci_remove(struct rte_pci_device *pci_dev)
static struct rte_pci_id pci_id_ree_table[] = {
{
RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_REE_PF)
+ PCI_DEVID_CNXK_RVU_REE_PF)
},
{
.vendor_id = 0,
}
};
-static struct rte_pci_driver otx2_regexdev_pmd = {
+static struct rte_pci_driver cn9k_regexdev_pmd = {
.id_table = pci_id_ree_table,
.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_ree_pci_probe,
- .remove = otx2_ree_pci_remove,
+ .probe = cn9k_ree_pci_probe,
+ .remove = cn9k_ree_pci_remove,
};
-RTE_PMD_REGISTER_PCI(REGEXDEV_NAME_OCTEONTX2_PMD, otx2_regexdev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(REGEXDEV_NAME_OCTEONTX2_PMD, pci_id_ree_table);
+RTE_PMD_REGISTER_PCI(REGEXDEV_NAME_CN9K_PMD, cn9k_regexdev_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(REGEXDEV_NAME_CN9K_PMD, pci_id_ree_table);
diff --git a/drivers/regex/cn9k/cn9k_regexdev.h b/drivers/regex/cn9k/cn9k_regexdev.h
new file mode 100644
index 0000000000..c715502167
--- /dev/null
+++ b/drivers/regex/cn9k/cn9k_regexdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CN9K_REGEXDEV_H_
+#define _CN9K_REGEXDEV_H_
+
+#include <rte_common.h>
+#include <rte_regexdev.h>
+
+#include "roc_api.h"
+
+#define cn9k_ree_dbg plt_ree_dbg
+#define cn9k_err plt_err
+
+#define ree_func_trace cn9k_ree_dbg
+
+/* Marvell CN9K Regex PMD device name */
+#define REGEXDEV_NAME_CN9K_PMD regex_cn9k
+
+/**
+ * Device private data
+ */
+struct cn9k_ree_data {
+ uint32_t regexdev_capa;
+ uint64_t rule_flags;
+ /**< Feature flags exposing HW/SW features for the given device */
+ uint16_t max_rules_per_group;
+ /**< Maximum rules supported per subset by this device */
+ uint16_t max_groups;
+ /**< Maximum subset supported by this device */
+ void **queue_pairs;
+ /**< Array of pointers to queue pairs. */
+ uint16_t nb_queue_pairs;
+ /**< Number of device queue pairs. */
+ struct roc_ree_vf vf;
+ /**< vf data */
+ struct rte_regexdev_rule *rules;
+ /**< rules to be compiled */
+ uint16_t nb_rules;
+ /**< number of rules */
+} __rte_cache_aligned;
+
+#endif /* _CN9K_REGEXDEV_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_compiler.c b/drivers/regex/cn9k/cn9k_regexdev_compiler.c
similarity index 86%
rename from drivers/regex/octeontx2/otx2_regexdev_compiler.c
rename to drivers/regex/cn9k/cn9k_regexdev_compiler.c
index 785459f741..935b8a53b4 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_compiler.c
+++ b/drivers/regex/cn9k/cn9k_regexdev_compiler.c
@@ -5,9 +5,8 @@
#include <rte_malloc.h>
#include <rte_regexdev.h>
-#include "otx2_regexdev.h"
-#include "otx2_regexdev_compiler.h"
-#include "otx2_regexdev_mbox.h"
+#include "cn9k_regexdev.h"
+#include "cn9k_regexdev_compiler.h"
#ifdef REE_COMPILER_SDK
#include <rxp-compiler.h>
@@ -65,7 +64,7 @@ ree_rule_db_compile(const struct rte_regexdev_rule *rules,
nb_rules*sizeof(struct rxp_rule_entry), 0);
if (ruleset.rules == NULL) {
- otx2_err("Could not allocate memory for rule compilation\n");
+ cn9k_err("Could not allocate memory for rule compilation\n");
return -EFAULT;
}
if (rof_for_incremental_compile)
@@ -126,9 +125,10 @@ ree_rule_db_compile(const struct rte_regexdev_rule *rules,
}
int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
char compiler_version[] = "20.5.2.eda0fa2";
char timestamp[] = "19700101_000001";
uint32_t rule_db_len, rule_dbi_len;
@@ -144,25 +144,25 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, &rule_dbi_len);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, &rule_dbi_len);
if (ret != 0) {
- otx2_err("Could not get rule db length");
+ cn9k_err("Could not get rule db length");
return ret;
}
if (rule_db_len > 0) {
- otx2_ree_dbg("Incremental compile, rule db len %d rule dbi len %d",
+ cn9k_ree_dbg("Incremental compile, rule db len %d rule dbi len %d",
rule_db_len, rule_dbi_len);
rule_db = rte_malloc("ree_rule_db", rule_db_len, 0);
if (!rule_db) {
- otx2_err("Could not allocate memory for rule db");
+ cn9k_err("Could not allocate memory for rule db");
return -EFAULT;
}
- ret = otx2_ree_rule_db_get(dev, rule_db, rule_db_len,
+ ret = roc_ree_rule_db_get(vf, rule_db, rule_db_len,
(char *)rule_dbi, rule_dbi_len);
if (ret) {
- otx2_err("Could not read rule db");
+ cn9k_err("Could not read rule db");
rte_free(rule_db);
return -EFAULT;
}
@@ -188,7 +188,7 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
ret = ree_rule_db_compile(data->rules, data->nb_rules, &rof,
&rofi, &rof_inc, rofi_inc_p);
if (rofi->number_of_entries == 0) {
- otx2_ree_dbg("No change to rule db");
+ cn9k_ree_dbg("No change to rule db");
ret = 0;
goto free_structs;
}
@@ -201,14 +201,14 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
&rofi, NULL, NULL);
}
if (ret != 0) {
- otx2_err("Could not compile rule db");
+ cn9k_err("Could not compile rule db");
goto free_structs;
}
rule_db_len = rof->number_of_entries * sizeof(struct rxp_rof_entry);
- ret = otx2_ree_rule_db_prog(dev, (char *)rof->rof_entries, rule_db_len,
+ ret = roc_ree_rule_db_prog(vf, (char *)rof->rof_entries, rule_db_len,
rofi_rof_entries, rule_dbi_len);
if (ret)
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
free_structs:
rxp_free_structs(NULL, NULL, NULL, NULL, NULL, &rof, NULL, &rofi, NULL,
@@ -221,7 +221,7 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
}
#else
int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev)
{
RTE_SET_USED(dev);
return -ENOTSUP;
diff --git a/drivers/regex/cn9k/cn9k_regexdev_compiler.h b/drivers/regex/cn9k/cn9k_regexdev_compiler.h
new file mode 100644
index 0000000000..4c29a69ada
--- /dev/null
+++ b/drivers/regex/cn9k/cn9k_regexdev_compiler.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CN9K_REGEXDEV_COMPILER_H_
+#define _CN9K_REGEXDEV_COMPILER_H_
+
+int
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev);
+
+#endif /* _CN9K_REGEXDEV_COMPILER_H_ */
diff --git a/drivers/regex/octeontx2/meson.build b/drivers/regex/cn9k/meson.build
similarity index 65%
rename from drivers/regex/octeontx2/meson.build
rename to drivers/regex/cn9k/meson.build
index 3f81add5bf..bb0504fba1 100644
--- a/drivers/regex/octeontx2/meson.build
+++ b/drivers/regex/cn9k/meson.build
@@ -16,12 +16,10 @@ if lib.found()
endif
sources = files(
- 'otx2_regexdev.c',
- 'otx2_regexdev_compiler.c',
- 'otx2_regexdev_hw_access.c',
- 'otx2_regexdev_mbox.c',
+ 'cn9k_regexdev.c',
+ 'cn9k_regexdev_compiler.c',
)
-deps += ['bus_pci', 'common_octeontx2', 'regexdev']
+deps += ['bus_pci', 'regexdev']
+deps += ['common_cnxk', 'mempool_cnxk']
-includes += include_directories('../../common/octeontx2')
diff --git a/drivers/regex/octeontx2/version.map b/drivers/regex/cn9k/version.map
similarity index 100%
rename from drivers/regex/octeontx2/version.map
rename to drivers/regex/cn9k/version.map
diff --git a/drivers/regex/meson.build b/drivers/regex/meson.build
index 94222e55fe..7ad55af8ca 100644
--- a/drivers/regex/meson.build
+++ b/drivers/regex/meson.build
@@ -3,6 +3,6 @@
drivers = [
'mlx5',
- 'octeontx2',
+ 'cn9k',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
diff --git a/drivers/regex/octeontx2/otx2_regexdev.h b/drivers/regex/octeontx2/otx2_regexdev.h
deleted file mode 100644
index d710535f5f..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev.h
+++ /dev/null
@@ -1,109 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_H_
-#define _OTX2_REGEXDEV_H_
-
-#include <rte_common.h>
-#include <rte_regexdev.h>
-
-#include "otx2_dev.h"
-
-#define ree_func_trace otx2_ree_dbg
-
-/* Marvell OCTEON TX2 Regex PMD device name */
-#define REGEXDEV_NAME_OCTEONTX2_PMD regex_octeontx2
-
-#define OTX2_REE_MAX_LFS 36
-#define OTX2_REE_MAX_QUEUES_PER_VF 36
-#define OTX2_REE_MAX_MATCHES_PER_VF 254
-
-#define OTX2_REE_MAX_PAYLOAD_SIZE (1 << 14)
-
-#define OTX2_REE_NON_INC_PROG 0
-#define OTX2_REE_INC_PROG 1
-
-#define REE_MOD_INC(i, l) ((i) == (l - 1) ? (i) = 0 : (i)++)
-
-
-/**
- * Device vf data
- */
-struct otx2_ree_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of regex queues attached */
- uint16_t max_matches;
- /**< Max matches supported*/
- uint16_t lf_msixoff[OTX2_REE_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t block_address;
- /**< REE Block Address */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
-};
-
-/**
- * Device private data
- */
-struct otx2_ree_data {
- uint32_t regexdev_capa;
- uint64_t rule_flags;
- /**< Feature flags exposes HW/SW features for the given device */
- uint16_t max_rules_per_group;
- /**< Maximum rules supported per subset by this device */
- uint16_t max_groups;
- /**< Maximum subset supported by this device */
- void **queue_pairs;
- /**< Array of pointers to queue pairs. */
- uint16_t nb_queue_pairs;
- /**< Number of device queue pairs. */
- struct otx2_ree_vf vf;
- /**< vf data */
- struct rte_regexdev_rule *rules;
- /**< rules to be compiled */
- uint16_t nb_rules;
- /**< number of rules */
-} __rte_cache_aligned;
-
-struct otx2_ree_rid {
- uintptr_t rid;
- /** Request id of a ree operation */
- uint64_t user_id;
- /* Client data */
- /**< IOVA address of the pattern to be matched. */
-};
-
-struct otx2_ree_pending_queue {
- uint64_t pending_count;
- /** Pending requests count */
- struct otx2_ree_rid *rid_queue;
- /** Array of pending requests */
- uint16_t enq_tail;
- /** Tail of queue to be used for enqueue */
- uint16_t deq_head;
- /** Head of queue to be used for dequeue */
-};
-
-struct otx2_ree_qp {
- uint32_t id;
- /**< Queue pair id */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- struct otx2_ree_pending_queue pend_q;
- /**< Pending queue */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- uint32_t otx2_regexdev_jobid;
- /**< Job ID */
- uint32_t write_offset;
- /**< write offset */
- regexdev_stop_flush_t cb;
- /**< Callback function called during rte_regex_dev_stop()*/
-};
-
-#endif /* _OTX2_REGEXDEV_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_compiler.h b/drivers/regex/octeontx2/otx2_regexdev_compiler.h
deleted file mode 100644
index 8d2625bf7f..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_compiler.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_COMPILER_H_
-#define _OTX2_REGEXDEV_COMPILER_H_
-
-int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev);
-
-#endif /* _OTX2_REGEXDEV_COMPILER_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
deleted file mode 100644
index f8031d0f72..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ /dev/null
@@ -1,167 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev_hw_access.h"
-#include "otx2_regexdev_mbox.h"
-
-static void
-ree_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_REE_LF_MISC_INT);
- if (intr == 0)
- return;
-
- otx2_ree_dbg("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_REE_LF_MISC_INT);
-}
-
-static void
-ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, ree_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_ree_err_intr_unregister(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_REE_LF_BAR2(vf, i);
- ree_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, ree_lf_err_intr_handler, (void *)base,
- msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_ree_err_intr_register(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid REE LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_REE_LF_BAR2(vf, i);
- ret = ree_lf_err_intr_register(dev, vf->lf_msixoff[i], base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_REE_LF_BAR2(vf, j);
- ree_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
- return ret;
-}
-
-int
-otx2_ree_iq_enable(const struct rte_regexdev *dev, const struct otx2_ree_qp *qp,
- uint8_t pri, uint32_t size_div2)
-{
- union otx2_ree_lf_sbuf_addr base;
- union otx2_ree_lf_ena lf_ena;
-
- /* Set instruction queue size and priority */
- otx2_ree_config_lf(dev, qp->id, pri, size_div2);
-
- /* Set instruction queue base address */
- /* Should be written after SBUF_CTL and before LF_ENA */
-
- base.u = otx2_read64(qp->base + OTX2_REE_LF_SBUF_ADDR);
- base.s.ptr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_REE_LF_SBUF_ADDR);
-
- /* Enable instruction queue */
-
- lf_ena.u = otx2_read64(qp->base + OTX2_REE_LF_ENA);
- lf_ena.s.ena = 1;
- otx2_write64(lf_ena.u, qp->base + OTX2_REE_LF_ENA);
-
- return 0;
-}
-
-void
-otx2_ree_iq_disable(struct otx2_ree_qp *qp)
-{
- union otx2_ree_lf_ena lf_ena;
-
- /* Stop instruction execution */
- lf_ena.u = otx2_read64(qp->base + OTX2_REE_LF_ENA);
- lf_ena.s.ena = 0x0;
- otx2_write64(lf_ena.u, qp->base + OTX2_REE_LF_ENA);
-}
-
-int
-otx2_ree_max_matches_get(const struct rte_regexdev *dev, uint8_t *max_matches)
-{
- union otx2_ree_af_reexm_max_match reexm_max_match;
- int ret;
-
- ret = otx2_ree_af_reg_read(dev, REE_AF_REEXM_MAX_MATCH,
- &reexm_max_match.u);
- if (ret)
- return ret;
-
- *max_matches = reexm_max_match.s.max;
- return 0;
-}
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
deleted file mode 100644
index dedf5f3282..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
+++ /dev/null
@@ -1,202 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_HW_ACCESS_H_
-#define _OTX2_REGEXDEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include "otx2_regexdev.h"
-
-/* REE instruction queue length */
-#define OTX2_REE_IQ_LEN (1 << 13)
-
-#define OTX2_REE_DEFAULT_CMD_QLEN OTX2_REE_IQ_LEN
-
-/* Status register bits */
-#define OTX2_REE_STATUS_PMI_EOJ_BIT (1 << 14)
-#define OTX2_REE_STATUS_PMI_SOJ_BIT (1 << 13)
-#define OTX2_REE_STATUS_MP_CNT_DET_BIT (1 << 7)
-#define OTX2_REE_STATUS_MM_CNT_DET_BIT (1 << 6)
-#define OTX2_REE_STATUS_ML_CNT_DET_BIT (1 << 5)
-#define OTX2_REE_STATUS_MST_CNT_DET_BIT (1 << 4)
-#define OTX2_REE_STATUS_MPT_CNT_DET_BIT (1 << 3)
-
-/* Register offsets */
-/* REE LF registers */
-#define OTX2_REE_LF_DONE_INT 0x120ull
-#define OTX2_REE_LF_DONE_INT_W1S 0x130ull
-#define OTX2_REE_LF_DONE_INT_ENA_W1S 0x138ull
-#define OTX2_REE_LF_DONE_INT_ENA_W1C 0x140ull
-#define OTX2_REE_LF_MISC_INT 0x300ull
-#define OTX2_REE_LF_MISC_INT_W1S 0x310ull
-#define OTX2_REE_LF_MISC_INT_ENA_W1S 0x320ull
-#define OTX2_REE_LF_MISC_INT_ENA_W1C 0x330ull
-#define OTX2_REE_LF_ENA 0x10ull
-#define OTX2_REE_LF_SBUF_ADDR 0x20ull
-#define OTX2_REE_LF_DONE 0x100ull
-#define OTX2_REE_LF_DONE_ACK 0x110ull
-#define OTX2_REE_LF_DONE_WAIT 0x148ull
-#define OTX2_REE_LF_DOORBELL 0x400ull
-#define OTX2_REE_LF_OUTSTAND_JOB 0x410ull
-
-/* BAR 0 */
-#define OTX2_REE_AF_QUE_SBUF_CTL(a) (0x1200ull | (uint64_t)(a) << 3)
-#define OTX2_REE_PRIV_LF_CFG(a) (0x41000ull | (uint64_t)(a) << 3)
-
-#define OTX2_REE_LF_BAR2(vf, q_id) \
- ((vf)->otx2_dev.bar2 + \
- (((vf)->block_address << 20) | ((q_id) << 12)))
-
-
-#define OTX2_REE_QUEUE_HI_PRIO 0x1
-
-enum ree_desc_type_e {
- REE_TYPE_JOB_DESC = 0x0,
- REE_TYPE_RESULT_DESC = 0x1,
- REE_TYPE_ENUM_LAST = 0x2
-};
-
-union otx2_ree_priv_lf_cfg {
- uint64_t u;
- struct {
- uint64_t slot : 8;
- uint64_t pf_func : 16;
- uint64_t reserved_24_62 : 39;
- uint64_t ena : 1;
- } s;
-};
-
-
-union otx2_ree_lf_sbuf_addr {
- uint64_t u;
- struct {
- uint64_t off : 7;
- uint64_t ptr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_ree_lf_ena {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t reserved_1_63 : 63;
- } s;
-};
-
-union otx2_ree_af_reexm_max_match {
- uint64_t u;
- struct {
- uint64_t max : 8;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_ree_lf_done {
- uint64_t u;
- struct {
- uint64_t done : 20;
- uint64_t reserved_20_63 : 44;
- } s;
-};
-
-union otx2_ree_inst {
- uint64_t u[8];
- struct {
- uint64_t doneint : 1;
- uint64_t reserved_1_3 : 3;
- uint64_t dg : 1;
- uint64_t reserved_5_7 : 3;
- uint64_t ooj : 1;
- uint64_t reserved_9_15 : 7;
- uint64_t reserved_16_63 : 48;
- uint64_t inp_ptr_addr : 64;
- uint64_t inp_ptr_ctl : 64;
- uint64_t res_ptr_addr : 64;
- uint64_t wq_ptr : 64;
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t ggrp : 10;
- uint64_t reserved_364_383 : 20;
- uint64_t reserved_384_391 : 8;
- uint64_t ree_job_id : 24;
- uint64_t ree_job_ctrl : 16;
- uint64_t ree_job_length : 15;
- uint64_t reserved_447_447 : 1;
- uint64_t ree_job_subset_id_0 : 16;
- uint64_t ree_job_subset_id_1 : 16;
- uint64_t ree_job_subset_id_2 : 16;
- uint64_t ree_job_subset_id_3 : 16;
- } cn98xx;
-};
-
-union otx2_ree_res_status {
- uint64_t u;
- struct {
- uint64_t job_type : 3;
- uint64_t mpt_cnt_det : 1;
- uint64_t mst_cnt_det : 1;
- uint64_t ml_cnt_det : 1;
- uint64_t mm_cnt_det : 1;
- uint64_t mp_cnt_det : 1;
- uint64_t mode : 2;
- uint64_t reserved_10_11 : 2;
- uint64_t reserved_12_12 : 1;
- uint64_t pmi_soj : 1;
- uint64_t pmi_eoj : 1;
- uint64_t reserved_15_15 : 1;
- uint64_t reserved_16_63 : 48;
- } s;
-};
-
-union otx2_ree_res {
- uint64_t u[8];
- struct ree_res_s_98 {
- uint64_t done : 1;
- uint64_t hwjid : 7;
- uint64_t ree_res_job_id : 24;
- uint64_t ree_res_status : 16;
- uint64_t ree_res_dmcnt : 8;
- uint64_t ree_res_mcnt : 8;
- uint64_t ree_meta_ptcnt : 16;
- uint64_t ree_meta_icnt : 16;
- uint64_t ree_meta_lcnt : 16;
- uint64_t ree_pmi_min_byte_ptr : 16;
- uint64_t ree_err : 1;
- uint64_t reserved_129_190 : 62;
- uint64_t doneint : 1;
- uint64_t reserved_192_255 : 64;
- uint64_t reserved_256_319 : 64;
- uint64_t reserved_320_383 : 64;
- uint64_t reserved_384_447 : 64;
- uint64_t reserved_448_511 : 64;
- } s;
-};
-
-union otx2_ree_match {
- uint64_t u;
- struct {
- uint64_t ree_rule_id : 32;
- uint64_t start_ptr : 14;
- uint64_t reserved_46_47 : 2;
- uint64_t match_length : 15;
- uint64_t reserved_63_63 : 1;
- } s;
-};
-
-void otx2_ree_err_intr_unregister(const struct rte_regexdev *dev);
-
-int otx2_ree_err_intr_register(const struct rte_regexdev *dev);
-
-int otx2_ree_iq_enable(const struct rte_regexdev *dev,
- const struct otx2_ree_qp *qp,
- uint8_t pri, uint32_t size_div128);
-
-void otx2_ree_iq_disable(struct otx2_ree_qp *qp);
-
-int otx2_ree_max_matches_get(const struct rte_regexdev *dev,
- uint8_t *max_matches);
-
-#endif /* _OTX2_REGEXDEV_HW_ACCESS_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.c b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
deleted file mode 100644
index 6d58d367d4..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.c
+++ /dev/null
@@ -1,401 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev_mbox.h"
-#include "otx2_regexdev.h"
-
-int
-otx2_ree_available_queues_get(const struct rte_regexdev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct free_rsrcs_rsp *rsp;
- struct otx2_dev *otx2_dev;
- int ret;
-
- otx2_dev = &vf->otx2_dev;
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (vf->block_address == RVU_BLOCK_ADDR_REE0)
- *nb_queues = rsp->ree0;
- else
- *nb_queues = rsp->ree1;
- return 0;
-}
-
-int
-otx2_ree_queues_attach(const struct rte_regexdev *dev, uint8_t nb_queues)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct rsrc_attach_req *req;
- struct otx2_mbox *mbox;
-
- /* Ask AF to attach required LFs */
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- /* 1 LF = 1 queue */
- req->reelfs = nb_queues;
- req->ree_blkaddr = vf->block_address;
-
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
-
- return 0;
-}
-
-int
-otx2_ree_queues_detach(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct rsrc_detach_req *req;
- struct otx2_mbox *mbox;
-
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->reelfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_ree_msix_offsets_get(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct msix_offset_rsp *rsp;
- struct otx2_mbox *mbox;
- uint32_t i, ret;
-
- /* Get REE MSI-X vector offsets */
- mbox = vf->otx2_dev.mbox;
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->block_address == RVU_BLOCK_ADDR_REE0)
- vf->lf_msixoff[i] = rsp->ree0_lf_msixoff[i];
- else
- vf->lf_msixoff[i] = rsp->ree1_lf_msixoff[i];
- otx2_ree_dbg("lf_msixoff[%d] 0x%x", i, vf->lf_msixoff[i]);
- }
-
- return 0;
-}
-
-static int
-ree_send_mbox_msg(struct otx2_ree_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- otx2_err("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
- uint32_t size)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_lf_req_msg *req;
- struct otx2_mbox *mbox;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_ree_config_lf(mbox);
-
- req->lf = lf;
- req->pri = pri ? 1 : 0;
- req->size = size;
- req->blkaddr = vf->block_address;
-
- ret = otx2_mbox_process(mbox);
- if (ret < 0) {
- otx2_err("Could not get mailbox response");
- return ret;
- }
- return 0;
-}
-
-int
-otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t *val)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_rd_wr_reg_msg *msg;
- struct otx2_mbox_dev *mdev;
- struct otx2_mbox *mbox;
- int ret, off;
-
- mbox = vf->otx2_dev.mbox;
- mdev = &mbox->dev[0];
- msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
- sizeof(*msg), sizeof(*msg));
- if (msg == NULL) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = vf->block_address;
-
- ret = ree_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct ree_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_rd_wr_reg_msg *msg;
- struct otx2_mbox *mbox;
-
- mbox = vf->otx2_dev.mbox;
- msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
- sizeof(*msg), sizeof(*msg));
- if (msg == NULL) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = vf->block_address;
-
- return ree_send_mbox_msg(vf);
-}
-
-int
-otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
- uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct ree_rule_db_get_req_msg *req;
- struct ree_rule_db_get_rsp_msg *rsp;
- char *rule_db_ptr = (char *)rule_db;
- struct otx2_ree_vf *vf = &data->vf;
- struct otx2_mbox *mbox;
- int ret, last = 0;
- uint32_t len = 0;
-
- mbox = vf->otx2_dev.mbox;
- if (!rule_db) {
- otx2_err("Couldn't return rule db due to NULL pointer");
- return -EFAULT;
- }
-
- while (!last) {
- req = (struct ree_rule_db_get_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- req->is_dbi = 0;
- req->offset = len;
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_db_len < len + rsp->len) {
- otx2_err("Rule db size is too small");
- return -EFAULT;
- }
- otx2_mbox_memcpy(rule_db_ptr, rsp->rule_db, rsp->len);
- len += rsp->len;
- rule_db_ptr = rule_db_ptr + rsp->len;
- last = rsp->is_last;
- }
-
- if (rule_dbi) {
- req = (struct ree_rule_db_get_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- req->is_dbi = 1;
- req->offset = 0;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_dbi_len < rsp->len) {
- otx2_err("Rule dbi size is too small");
- return -EFAULT;
- }
- otx2_mbox_memcpy(rule_dbi, rsp->rule_db, rsp->len);
- }
- return 0;
-}
-
-int
-otx2_ree_rule_db_len_get(const struct rte_regexdev *dev,
- uint32_t *rule_db_len,
- uint32_t *rule_dbi_len)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct ree_rule_db_len_rsp_msg *rsp;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_req_msg *req;
- struct otx2_mbox *mbox;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- req = (struct ree_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_LEN_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_db_len != NULL)
- *rule_db_len = rsp->len;
- if (rule_dbi_len != NULL)
- *rule_dbi_len = rsp->inc_len;
-
- return 0;
-}
-
-static int
-ree_db_msg(const struct rte_regexdev *dev, const char *db, uint32_t db_len,
- int inc, int dbi)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- uint32_t len_left = db_len, offset = 0;
- struct ree_rule_db_prog_req_msg *req;
- struct otx2_ree_vf *vf = &data->vf;
- const char *rule_db_ptr = db;
- struct otx2_mbox *mbox;
- struct msg_rsp *rsp;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- while (len_left) {
- req = (struct ree_rule_db_prog_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
- req->hdr.id = MBOX_MSG_REE_RULE_DB_PROG;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->offset = offset;
- req->total_len = db_len;
- req->len = REE_RULE_DB_REQ_BLOCK_SIZE;
- req->is_incremental = inc;
- req->is_dbi = dbi;
- req->blkaddr = vf->block_address;
-
- if (len_left < REE_RULE_DB_REQ_BLOCK_SIZE) {
- req->is_last = true;
- req->len = len_left;
- }
- otx2_mbox_memcpy(req->rule_db, rule_db_ptr, req->len);
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret) {
- otx2_err("Programming mailbox processing failed");
- return ret;
- }
- len_left -= req->len;
- offset += req->len;
- rule_db_ptr = rule_db_ptr + req->len;
- }
- return 0;
-}
-
-int
-otx2_ree_rule_db_prog(const struct rte_regexdev *dev, const char *rule_db,
- uint32_t rule_db_len, const char *rule_dbi,
- uint32_t rule_dbi_len)
-{
- int inc, ret;
-
- if (rule_db_len == 0) {
- otx2_err("Couldn't program empty rule db");
- return -EFAULT;
- }
- inc = (rule_dbi_len != 0);
- if ((rule_db == NULL) || (inc && (rule_dbi == NULL))) {
- otx2_err("Couldn't program NULL rule db");
- return -EFAULT;
- }
- if (inc) {
- ret = ree_db_msg(dev, rule_dbi, rule_dbi_len, inc, 1);
- if (ret)
- return ret;
- }
- return ree_db_msg(dev, rule_db, rule_db_len, inc, 0);
-}
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.h b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
deleted file mode 100644
index 953efa6724..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_MBOX_H_
-#define _OTX2_REGEXDEV_MBOX_H_
-
-#include <rte_regexdev.h>
-
-int otx2_ree_available_queues_get(const struct rte_regexdev *dev,
- uint16_t *nb_queues);
-
-int otx2_ree_queues_attach(const struct rte_regexdev *dev, uint8_t nb_queues);
-
-int otx2_ree_queues_detach(const struct rte_regexdev *dev);
-
-int otx2_ree_msix_offsets_get(const struct rte_regexdev *dev);
-
-int otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
- uint32_t size);
-
-int otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t *val);
-
-int otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val);
-
-int otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
- uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len);
-
-int otx2_ree_rule_db_len_get(const struct rte_regexdev *dev,
- uint32_t *rule_db_len, uint32_t *rule_dbi_len);
-
-int otx2_ree_rule_db_prog(const struct rte_regexdev *dev, const char *rule_db,
- uint32_t rule_db_len, const char *rule_dbi,
- uint32_t rule_dbi_len);
-
-#endif /* _OTX2_REGEXDEV_MBOX_H_ */
--
2.34.1
^ permalink raw reply [relevance 2%]
* RE: [RFC] cryptodev: asymmetric crypto random number source
2021-12-03 10:03 3% [RFC] cryptodev: asymmetric crypto random number source Kusztal, ArkadiuszX
@ 2021-12-13 8:14 3% ` Akhil Goyal
2021-12-13 9:27 0% ` Ramkumar Balu
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-12-13 8:14 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, Anoob Joseph, Zhang, Roy Fan; +Cc: dev, Ramkumar Balu
[-- Attachment #1: Type: text/plain, Size: 1147 bytes --]
++Ram for openssl
ECDSA op:
rte_crypto_param k;
/**< The ECDSA per-message secret number, which is an integer
* in the interval (1, n-1)
*/
DSA op:
No 'k'.
This one I think have described some time ago:
Only PMD that verifies ECDSA is OCTEON which apparently needs 'k' provided by user.
Only PMD that verifies DSA is OpenSSL PMD which will generate its own random number internally.
So in case PMD supports one of these options (or especially when supports both) we need to give some information here.
The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
In case (k == NULL) PMD should generate it itself if possible, otherwise it should push crypto_op to the response ring with appropriate error code.
Another options would be:
* Extend rte_cryptodev_config and rte_cryptodev_info with information about random number generator for specific device (though it would be ABI breakage)
* Provide some kind of callback to get random number from user (which could be useful for other things like RSA padding as well)
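To make the second option concrete, a minimal sketch of what a user-supplied RNG hook could look like — all type and function names below are hypothetical, no such API exists in cryptodev today:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical callback type: fill 'out' with 'len' random bytes,
 * return 0 on success. */
typedef int (*rte_crypto_rng_cb_t)(void *ctx, uint8_t *out, size_t len);

/* Hypothetical per-device state; the PMD would invoke the hook whenever
 * it needs 'k' (or e.g. RSA padding bytes) and none was supplied. */
struct dev_rng {
	rte_crypto_rng_cb_t cb;
	void *ctx;
};

static int
dev_fill_random(struct dev_rng *rng, uint8_t *out, size_t len)
{
	if (rng->cb == NULL)
		return -1; /* no user RNG registered */
	return rng->cb(rng->ctx, out, len);
}
```

This keeps the choice of RNG entirely in the application's hands, at the cost of one more piece of per-device configuration.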
[-- Attachment #2: Type: text/html, Size: 9006 bytes --]
^ permalink raw reply [relevance 3%]
* RE: [RFC] cryptodev: asymmetric crypto random number source
2021-12-13 8:14 3% ` Akhil Goyal
@ 2021-12-13 9:27 0% ` Ramkumar Balu
2021-12-17 15:26 0% ` Kusztal, ArkadiuszX
0 siblings, 1 reply; 200+ results
From: Ramkumar Balu @ 2021-12-13 9:27 UTC (permalink / raw)
To: Akhil Goyal, Kusztal, ArkadiuszX, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
> ++Ram for openssl
>
> > ECDSA op:
> > rte_crypto_param k;
> > /**< The ECDSA per-message secret number, which is an integer
> > * in the interval (1, n-1)
> > */
> > DSA op:
> > No 'k'.
> >
> > This one I think have described some time ago:
> > Only PMD that verifies ECDSA is OCTEON which apparently needs 'k' provided by user.
> > Only PMD that verifies DSA is OpenSSL PMD which will generate its own random number internally.
> >
> > So in case PMD supports one of these options (or especially when supports both) we need to give some information here.
We can have a standard way to represent whether a particular rte_crypto_param has been set by the application. Then it is up to the PMD to perform the op, or to return an error code if it is unable to proceed.
> >
> > The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
> > In case (k == NULL) PMD should generate it itself if possible, otherwise it should push crypto_op to the response ring with appropriate error code.
This case could occur for other params as well. Having a few as nested variables and others as pointers could be confusing for memory alloc/dealloc. However, rte_crypto_param already has a data pointer inside it, which can be used in the same manner. For example, in this case (k.data == NULL), the PMD should generate the random number if possible, or push the op to the response ring with an error code. This can be done without breaking backward compatibility.
This can be the standard way for PMDs to determine whether a particular rte_crypto_param is valid or NULL.
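A minimal sketch of that convention from the PMD side — the struct here is a simplified stand-in, not the real rte_crypto_param definition:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for rte_crypto_param (data/iova/length as in DPDK). */
struct crypto_param {
	uint8_t *data;
	uint64_t iova;
	size_t length;
};

/* PMD decision: 1 = use caller-supplied k, 0 = generate internally,
 * -ENOTSUP = cannot proceed (no RNG support and no k supplied). */
static int
pmd_resolve_k(const struct crypto_param *k, int pmd_has_rng)
{
	if (k->data != NULL && k->length != 0)
		return 1; /* application supplied k */
	return pmd_has_rng ? 0 : -ENOTSUP;
}
```

A PMD that cannot generate its own random number would map the -ENOTSUP case to the appropriate crypto_op error status on the response ring.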
> >
> > Another options would be:
> > - Extend rte_cryptodev_config and rte_cryptodev_info with information about random number generator for specific device (though it would be ABI breakage)
> > - Provide some kind of callback to get random number from user (which could be useful for other things like RSA padding as well)
I think the previous solution is more straightforward and simpler, unless we want the ability to configure the random number generator per device.
Thanks,
Ramkumar Balu
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
@ 2021-12-11 9:04 1% ` jerinj
1 sibling, 0 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev, Thomas Monjalon, Akhil Goyal, Declan Doherty, Jerin Jacob,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Ankur Dwivedi, Anoob Joseph, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Pavan Nikhilesh, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Nalla Pradeep,
Ciara Power, Shijith Thotton, Ashwin Sekhar T K, Anatoly Burakov
Cc: david.marchand, ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, in view of enabling a unified driver
for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
This patch does the following
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change references to OCTEONTX2 to OCTEON 9. Old release notes and
the kernel-related documentation are not updated by this change.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
MAINTAINERS | 37 -
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 2 +-
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 12 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 520 ---
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 8 +-
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/event/octeontx2/version.map | 3 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
drivers/net/octeontx2/otx2_rss.c | 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
usertools/dpdk-devbind.py | 12 +-
149 files changed, 92 insertions(+), 52124 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/event/octeontx2/version.map
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 854b81f2a3..336bbb3547 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -534,15 +534,6 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-F: drivers/common/octeontx2/
-F: drivers/mempool/octeontx2/
-F: doc/guides/platform/img/octeontx2_*
-F: doc/guides/platform/octeontx2.rst
-F: doc/guides/mempool/octeontx2.rst
-
Bus Drivers
-----------
@@ -795,21 +786,6 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-M: Kiran Kumar K <kirankumark@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/octeontx2/
-F: doc/guides/nics/features/octeontx2*.ini
-F: doc/guides/nics/octeontx2.rst
-
-Marvell OCTEON TX2 - security
-M: Anoob Joseph <anoobj@marvell.com>
-T: git://dpdk.org/next/dpdk-next-crypto
-F: drivers/common/octeontx2/otx2_sec*
-F: drivers/net/octeontx2/otx2_ethdev_sec*
-
Marvell OCTEON TX EP - endpoint
M: Nalla Pradeep <pnalla@marvell.com>
M: Radha Mohan Chintakuntla <radhac@marvell.com>
@@ -1115,13 +1091,6 @@ F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
-Marvell OCTEON TX2 crypto
-M: Ankur Dwivedi <adwivedi@marvell.com>
-M: Anoob Joseph <anoobj@marvell.com>
-F: drivers/crypto/octeontx2/
-F: doc/guides/cryptodevs/octeontx2.rst
-F: doc/guides/cryptodevs/features/octeontx2.ini
-
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/crypto/mlx5/
@@ -1298,12 +1267,6 @@ M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
-Marvell OCTEON TX2
-M: Pavan Nikhilesh <pbhagavatula@marvell.com>
-M: Jerin Jacob <jerinj@marvell.com>
-F: drivers/event/octeontx2/
-F: doc/guides/eventdevs/octeontx2.rst
-
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 2b480adfba..344a609a4d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -341,7 +341,6 @@ driver_test_names = [
'cryptodev_dpaa_sec_autotest',
'cryptodev_dpaa2_sec_autotest',
'cryptodev_null_autotest',
- 'cryptodev_octeontx2_autotest',
'cryptodev_openssl_autotest',
'cryptodev_openssl_asym_autotest',
'cryptodev_qat_autotest',
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cdadb..293f59b48c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -15615,12 +15615,6 @@ test_cryptodev_octeontx(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX_SYM_PMD));
}
-static int
-test_cryptodev_octeontx2(void)
-{
- return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
-}
-
static int
test_cryptodev_caam_jr(void)
{
@@ -15733,7 +15727,6 @@ REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..70f23a3f67 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -68,7 +68,6 @@
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..68f4d8e7a6 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -2375,20 +2375,6 @@ test_cryptodev_octeontx_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
-static int
-test_cryptodev_octeontx2_asym(void)
-{
- gbl_driver_id = rte_cryptodev_driver_id_get(
- RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
- if (gbl_driver_id == -1) {
- RTE_LOG(ERR, USER1, "OCTEONTX2 PMD must be loaded.\n");
- return TEST_FAILED;
- }
-
- /* Use test suite registered for crypto_octeontx PMD */
- return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
-}
-
static int
test_cryptodev_cn9k_asym(void)
{
@@ -2424,8 +2410,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_TEST_COMMAND(cryptodev_octeontx_asym_autotest,
test_cryptodev_octeontx_asym);
-
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_asym_autotest,
- test_cryptodev_octeontx2_asym);
REGISTER_TEST_COMMAND(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_TEST_COMMAND(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 843d9766b0..10028fe11d 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1018,12 +1018,6 @@ test_eventdev_selftest_octeontx(void)
return test_eventdev_selftest_impl("event_octeontx", "");
}
-static int
-test_eventdev_selftest_octeontx2(void)
-{
- return test_eventdev_selftest_impl("event_octeontx2", "");
-}
-
static int
test_eventdev_selftest_dpaa2(void)
{
@@ -1052,8 +1046,6 @@ REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
test_eventdev_selftest_octeontx);
-REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
- test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc
index 88e5f10945..a3578c03a1 100644
--- a/config/arm/arm64_cn10k_linux_gcc
+++ b/config/arm/arm64_cn10k_linux_gcc
@@ -14,4 +14,3 @@ endian = 'little'
[properties]
platform = 'cn10k'
-disable_drivers = 'common/octeontx2'
diff --git a/config/arm/arm64_octeontx2_linux_gcc b/config/arm/arm64_cn9k_linux_gcc
similarity index 84%
rename from config/arm/arm64_octeontx2_linux_gcc
rename to config/arm/arm64_cn9k_linux_gcc
index 8fbdd3868d..a94b44a551 100644
--- a/config/arm/arm64_octeontx2_linux_gcc
+++ b/config/arm/arm64_cn9k_linux_gcc
@@ -13,5 +13,4 @@ cpu = 'armv8-a'
endian = 'little'
[properties]
-platform = 'octeontx2'
-disable_drivers = 'common/cnxk'
+platform = 'cn9k'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 213324d262..16e808cdd5 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -139,7 +139,7 @@ implementer_cavium = {
'march_features': ['crc', 'crypto', 'lse'],
'compiler_options': ['-mcpu=octeontx2'],
'flags': [
- ['RTE_MACHINE', '"octeontx2"'],
+ ['RTE_MACHINE', '"cn9k"'],
['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true],
['RTE_MAX_LCORE', 36],
@@ -340,8 +340,8 @@ soc_n2 = {
'numa': false
}
-soc_octeontx2 = {
- 'description': 'Marvell OCTEON TX2',
+soc_cn9k = {
+ 'description': 'Marvell OCTEON 9',
'implementer': '0x43',
'part_number': '0xb2',
'numa': false
@@ -377,6 +377,7 @@ generic_aarch32: Generic un-optimized build for armv8 aarch32 execution mode.
armada: Marvell ARMADA
bluefield: NVIDIA BlueField
centriq2400: Qualcomm Centriq 2400
+cn9k: Marvell OCTEON 9
cn10k: Marvell OCTEON 10
dpaa: NXP DPAA
emag: Ampere eMAG
@@ -385,7 +386,6 @@ kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
n2: Arm Neoverse N2
-octeontx2: Marvell OCTEON TX2
stingray: Broadcom Stingray
thunderx2: Marvell ThunderX2 T99
thunderxt88: Marvell ThunderX T88
@@ -399,6 +399,7 @@ socs = {
'armada': soc_armada,
'bluefield': soc_bluefield,
'centriq2400': soc_centriq2400,
+ 'cn9k': soc_cn9k,
'cn10k' : soc_cn10k,
'dpaa': soc_dpaa,
'emag': soc_emag,
@@ -407,7 +408,6 @@ socs = {
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
'n2': soc_n2,
- 'octeontx2': soc_octeontx2,
'stingray': soc_stingray,
'thunderx2': soc_thunderx2,
'thunderxt88': soc_thunderxt88
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index 5e654189a8..675f10142e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,7 +48,7 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
- if grep -qE "\<librte_regex_octeontx2" $dump; then
+ if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
continue
fi
diff --git a/doc/guides/cryptodevs/features/octeontx2.ini b/doc/guides/cryptodevs/features/octeontx2.ini
deleted file mode 100644
index c54dc9409c..0000000000
--- a/doc/guides/cryptodevs/features/octeontx2.ini
+++ /dev/null
@@ -1,87 +0,0 @@
-;
-; Supported features of the 'octeontx2' crypto driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Symmetric crypto = Y
-Asymmetric crypto = Y
-Sym operation chaining = Y
-HW Accelerated = Y
-Protocol offload = Y
-In Place SGL = Y
-OOP SGL In LB Out = Y
-OOP SGL In SGL Out = Y
-OOP LB In LB Out = Y
-RSA PRIV OP KEY QT = Y
-Digest encrypted = Y
-Symmetric sessionless = Y
-
-;
-; Supported crypto algorithms of 'octeontx2' crypto driver.
-;
-[Cipher]
-NULL = Y
-3DES CBC = Y
-3DES ECB = Y
-AES CBC (128) = Y
-AES CBC (192) = Y
-AES CBC (256) = Y
-AES CTR (128) = Y
-AES CTR (192) = Y
-AES CTR (256) = Y
-AES XTS (128) = Y
-AES XTS (256) = Y
-DES CBC = Y
-KASUMI F8 = Y
-SNOW3G UEA2 = Y
-ZUC EEA3 = Y
-
-;
-; Supported authentication algorithms of 'octeontx2' crypto driver.
-;
-[Auth]
-NULL = Y
-AES GMAC = Y
-KASUMI F9 = Y
-MD5 = Y
-MD5 HMAC = Y
-SHA1 = Y
-SHA1 HMAC = Y
-SHA224 = Y
-SHA224 HMAC = Y
-SHA256 = Y
-SHA256 HMAC = Y
-SHA384 = Y
-SHA384 HMAC = Y
-SHA512 = Y
-SHA512 HMAC = Y
-SNOW3G UIA2 = Y
-ZUC EIA3 = Y
-
-;
-; Supported AEAD algorithms of 'octeontx2' crypto driver.
-;
-[AEAD]
-AES GCM (128) = Y
-AES GCM (192) = Y
-AES GCM (256) = Y
-CHACHA20-POLY1305 = Y
-
-;
-; Supported Asymmetric algorithms of the 'octeontx2' crypto driver.
-;
-[Asymmetric]
-RSA = Y
-DSA =
-Modular Exponentiation = Y
-Modular Inversion =
-Diffie-hellman =
-ECDSA = Y
-ECPM = Y
-
-;
-; Supported Operating systems of the 'octeontx2' crypto driver.
-;
-[OS]
-Linux = Y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 3dcc2ecd2e..39cca6dbde 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -22,7 +22,6 @@ Crypto Device Drivers
dpaa_sec
kasumi
octeontx
- octeontx2
openssl
mlx5
mvsam
diff --git a/doc/guides/cryptodevs/octeontx2.rst b/doc/guides/cryptodevs/octeontx2.rst
deleted file mode 100644
index 811e61a1f6..0000000000
--- a/doc/guides/cryptodevs/octeontx2.rst
+++ /dev/null
@@ -1,188 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-
-Marvell OCTEON TX2 Crypto Poll Mode Driver
-==========================================
-
-The OCTEON TX2 crypto poll mode driver provides support for offloading
-cryptographic operations to cryptographic accelerator units on the
-**OCTEON TX2** :sup:`®` family of processors (CN9XXX).
-
-More information about OCTEON TX2 SoCs may be obtained from `<https://www.marvell.com>`_
-
-Features
---------
-
-The OCTEON TX2 crypto PMD has support for:
-
-Symmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Cipher algorithms:
-
-* ``RTE_CRYPTO_CIPHER_NULL``
-* ``RTE_CRYPTO_CIPHER_3DES_CBC``
-* ``RTE_CRYPTO_CIPHER_3DES_ECB``
-* ``RTE_CRYPTO_CIPHER_AES_CBC``
-* ``RTE_CRYPTO_CIPHER_AES_CTR``
-* ``RTE_CRYPTO_CIPHER_AES_XTS``
-* ``RTE_CRYPTO_CIPHER_DES_CBC``
-* ``RTE_CRYPTO_CIPHER_KASUMI_F8``
-* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
-* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
-
-Hash algorithms:
-
-* ``RTE_CRYPTO_AUTH_NULL``
-* ``RTE_CRYPTO_AUTH_AES_GMAC``
-* ``RTE_CRYPTO_AUTH_KASUMI_F9``
-* ``RTE_CRYPTO_AUTH_MD5``
-* ``RTE_CRYPTO_AUTH_MD5_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA1``
-* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA224``
-* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA256``
-* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA384``
-* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA512``
-* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
-* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
-* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
-
-AEAD algorithms:
-
-* ``RTE_CRYPTO_AEAD_AES_GCM``
-* ``RTE_CRYPTO_AEAD_CHACHA20_POLY1305``
-
-Asymmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* ``RTE_CRYPTO_ASYM_XFORM_RSA``
-* ``RTE_CRYPTO_ASYM_XFORM_MODEX``
-
-
-Installation
-------------
-
-The OCTEON TX2 crypto PMD may be compiled natively on an OCTEON TX2 platform or
-cross-compiled on an x86 platform.
-
-Refer to :doc:`../platform/octeontx2` for instructions to build your DPDK
-application.
-
-.. note::
-
- The OCTEON TX2 crypto PMD uses services from the kernel mode OCTEON TX2
- crypto PF driver in linux. This driver is included in the OCTEON TX SDK.
-
-Initialization
---------------
-
-List the CPT PF devices available on your OCTEON TX2 platform:
-
-.. code-block:: console
-
- lspci -d:a0fd
-
-``a0fd`` is the CPT PF device id. You should see output similar to:
-
-.. code-block:: console
-
- 0002:10:00.0 Class 1080: Device 177d:a0fd
-
-Set ``sriov_numvfs`` on the CPT PF device, to create a VF:
-
-.. code-block:: console
-
- echo 1 > /sys/bus/pci/drivers/octeontx2-cpt/0002:10:00.0/sriov_numvfs
-
-Bind the CPT VF device to the vfio_pci driver:
-
-.. code-block:: console
-
- echo '177d a0fe' > /sys/bus/pci/drivers/vfio-pci/new_id
- echo 0002:10:00.1 > /sys/bus/pci/devices/0002:10:00.1/driver/unbind
- echo 0002:10:00.1 > /sys/bus/pci/drivers/vfio-pci/bind
-
-Another way to bind the VF would be to use the ``dpdk-devbind.py`` script:
-
-.. code-block:: console
-
- cd <dpdk directory>
- ./usertools/dpdk-devbind.py -u 0002:10:00.1
- ./usertools/dpdk-devbind.py -b vfio-pci 0002:10.00.1
-
-.. note::
-
- * For CN98xx SoC, it is recommended to use even and odd DBDF VFs to achieve
- higher performance as even VF uses one crypto engine and odd one uses
- another crypto engine.
-
- * Ensure that sufficient huge pages are available for your application::
-
- dpdk-hugepages.py --setup 4G --pagesize 512M
-
- Refer to :ref:`linux_gsg_hugepages` for more details.
-
-Debugging Options
------------------
-
-.. _table_octeontx2_crypto_debug_options:
-
-.. table:: OCTEON TX2 crypto PMD debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | CPT | --log-level='pmd\.crypto\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Testing
--------
-
-The symmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_autotest
-
-The asymmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_asym_autotest
-
-
-Lookaside IPsec Support
------------------------
-
-The OCTEON TX2 SoC can accelerate IPsec traffic in lookaside protocol mode,
-with its **cryptographic accelerator (CPT)**. ``OCTEON TX2 crypto PMD`` implements
-this as an ``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL`` offload.
-
-Refer to :doc:`../prog_guide/rte_security` for more details on protocol offloads.
-
-This feature can be tested with ipsec-secgw sample application.
-
-
-Features supported
-~~~~~~~~~~~~~~~~~~
-
-* IPv4
-* IPv6
-* ESP
-* Tunnel mode
-* Transport mode(IPv4)
-* ESN
-* Anti-replay
-* UDP Encapsulation
-* AES-128/192/256-GCM
-* AES-128/192/256-CBC-SHA1-HMAC
-* AES-128/192/256-CBC-SHA256-128-HMAC
diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst
index da2dd59071..418b9a9d63 100644
--- a/doc/guides/dmadevs/cnxk.rst
+++ b/doc/guides/dmadevs/cnxk.rst
@@ -7,7 +7,7 @@ CNXK DMA Device Driver
======================
The ``cnxk`` dmadev driver provides a poll-mode driver (PMD) for Marvell DPI DMA
-Hardware Accelerator block found in OCTEONTX2 and OCTEONTX3 family of SoCs.
+Hardware Accelerator block found in the OCTEON 9 and OCTEON 10 families of SoCs.
Each DMA queue is exposed as a VF function when SRIOV is enabled.
The block supports following modes of DMA transfers:
diff --git a/doc/guides/eventdevs/features/octeontx2.ini b/doc/guides/eventdevs/features/octeontx2.ini
deleted file mode 100644
index 05b84beb6e..0000000000
--- a/doc/guides/eventdevs/features/octeontx2.ini
+++ /dev/null
@@ -1,30 +0,0 @@
-;
-; Supported features of the 'octeontx2' eventdev driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Scheduling Features]
-queue_qos = Y
-distributed_sched = Y
-queue_all_types = Y
-nonseq_mode = Y
-runtime_port_link = Y
-multiple_queue_port = Y
-carry_flow_id = Y
-maintenance_free = Y
-
-[Eth Rx adapter Features]
-internal_port = Y
-multi_eventq = Y
-
-[Eth Tx adapter Features]
-internal_port = Y
-
-[Crypto adapter Features]
-internal_port_op_new = Y
-internal_port_op_fwd = Y
-internal_port_qp_ev_bind = Y
-
-[Timer adapter Features]
-internal_port = Y
-periodic = Y
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index b11657f7ae..eed19ad28c 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -19,5 +19,4 @@ application through the eventdev API.
dsw
sw
octeontx
- octeontx2
opdl
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
deleted file mode 100644
index 0fa57abfa3..0000000000
--- a/doc/guides/eventdevs/octeontx2.rst
+++ /dev/null
@@ -1,178 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 SSO Eventdev Driver
-===============================
-
-The OCTEON TX2 SSO PMD (**librte_event_octeontx2**) provides poll mode
-eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2**
-SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 SSO PMD are:
-
-- 256 Event queues
-- 26 (dual) and 52 (single) Event ports
-- HW event scheduler
-- Supports 1M flows per event queue
-- Flow based event pipelining
-- Flow pinning support in flow based event pipelining
-- Queue based event pipelining
-- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
-- Event scheduling QoS based on event queue priority
-- Open system with configurable amount of outstanding events limited only by
- DRAM
-- HW accelerated dequeue timeout support to enable power management
-- HW managed event timers support through TIM, with high precision and
- time granularity of 2.5us.
-- Up to 256 TIM rings aka event timer adapters.
-- Up to 8 rings traversed in parallel.
-- HW managed packets enqueued from ethdev to eventdev exposed through event eth
- RX adapter.
-- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
- capability while maintaining receive packet order.
-- Full Rx/Tx offload support defined through ethdev queue config.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-
-Runtime Config Options
-----------------------
-
-- ``Maximum number of in-flight events`` (default ``8192``)
-
- In **Marvell OCTEON TX2** the max number of in-flight events are only limited
- by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide
- upper limit for in-flight events.
- For example::
-
- -a 0002:0e:00.0,xae_cnt=16384
-
-- ``Force legacy mode``
-
- The ``single_ws`` devargs parameter is introduced to force legacy mode i.e
- single workslot mode in SSO and disable the default dual workslot mode.
- For example::
-
- -a 0002:0e:00.0,single_ws=1
-
-- ``Event Group QoS support``
-
- SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
- events. By default the buffers are assigned to the SSO GGRPs to
- satisfy minimum HW requirements. SSO is free to assign the remaining
- buffers to GGRPs based on a preconfigured threshold.
- We can control the QoS of SSO GGRP by modifying the above mentioned
- thresholds. GGRPs that have higher importance can be assigned higher
- thresholds than the rest. The dictionary format is as follows
- [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
- default.
- For example::
-
- -a 0002:0e:00.0,qos=[1-50-50-50]
-
-- ``TIM disable NPA``
-
- By default chunks are allocated from NPA then TIM can automatically free
- them when traversing the list of chunks. The ``tim_disable_npa`` devargs
- parameter disables NPA and uses software mempool to manage chunks
- For example::
-
- -a 0002:0e:00.0,tim_disable_npa=1
-
-- ``TIM modify chunk slots``
-
- The ``tim_chnk_slots`` devargs can be used to modify number of chunk slots.
- Chunks are used to store event timers, a chunk can be visualised as an array
- where the last element points to the next chunk and rest of them are used to
- store events. TIM traverses the list of chunks and enqueues the event timers
- to SSO. The default value is 255 and the max value is 4095.
- For example::
-
- -a 0002:0e:00.0,tim_chnk_slots=1023
-
-- ``TIM enable arm/cancel statistics``
-
- The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
- event timer adapter.
- For example::
-
- -a 0002:0e:00.0,tim_stats_ena=1
-
-- ``TIM limit max rings reserved``
-
- The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
- rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW
- resources we can avoid starving other applications by not grabbing all the
- rings.
- For example::
-
- -a 0002:0e:00.0,tim_rings_lmt=5
-
-- ``TIM ring control internal parameters``
-
- When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
- control each TIM rings internal parameters uniquely. The following dict
- format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
- default values.
- For Example::
-
- -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:0e:00.0,npa_lock_mask=0xf
-
-- ``Force Rx Back pressure``
-
- Force Rx back pressure when same mempool is used across ethernet device
- connected to event device.
-
- For example::
-
- -a 0002:0e:00.0,force_rx_bp=1
-
-Debugging Options
------------------
-
-.. _table_octeontx2_event_debug_options:
-
-.. table:: OCTEON TX2 event device debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' |
- +---+------------+-------------------------------------------------------+
-
-Limitations
------------
-
-Rx adapter support
-~~~~~~~~~~~~~~~~~~
-
-Using the same mempool for all the ethernet device ports connected to
-event device would cause back pressure to be asserted only on the first
-ethernet device.
-Back pressure is automatically disabled when using same mempool for all the
-ethernet devices connected to event device to override this applications can
-use `force_rx_bp=1` device arguments.
-Using unique mempool per each ethernet device is recommended when they are
-connected to event device.
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
index ce53bc1ac7..e4b6ee7d31 100644
--- a/doc/guides/mempool/index.rst
+++ b/doc/guides/mempool/index.rst
@@ -13,6 +13,5 @@ application through the mempool API.
cnxk
octeontx
- octeontx2
ring
stack
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
deleted file mode 100644
index 1272c1e72b..0000000000
--- a/doc/guides/mempool/octeontx2.rst
+++ /dev/null
@@ -1,92 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 NPA Mempool Driver
-=============================
-
-The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool
-driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-OCTEON TX2 NPA PMD supports:
-
-- Up to 128 NPA LFs
-- 1M Pools per LF
-- HW mempool manager
-- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path.
-- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-Pre-Installation Configuration
-------------------------------
-
-
-Runtime Config Options
-~~~~~~~~~~~~~~~~~~~~~~
-
-- ``Maximum number of mempools per application`` (default ``128``)
-
- The maximum number of mempools per application needs to be configured on
- HW during mempool driver initialization. HW can support up to 1M mempools,
- Since each mempool costs set of HW resources, the ``max_pools`` ``devargs``
- parameter is being introduced to configure the number of mempools required
- for the application.
- For example::
-
- -a 0002:02:00.0,max_pools=512
-
- With the above configuration, the driver will set up only 512 mempools for
- the given application to save HW resources.
-
-.. note::
-
- Since this configuration is per application, the end user needs to
- provide ``max_pools`` parameter to the first PCIe device probed by the given
- application.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-Debugging Options
-~~~~~~~~~~~~~~~~~
-
-.. _table_octeontx2_mempool_debug_options:
-
-.. table:: OCTEON TX2 mempool debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Standalone mempool device
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
- The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices
- available in the system. In order to avoid, the end user to bind the mempool
- device prior to use ethdev and/or eventdev device, the respective driver
- configures an NPA LF and attach to the first probed ethdev or eventdev device.
- In case, if end user need to run mempool as a standalone device
- (without ethdev or eventdev), end user needs to bind a mempool device using
- ``usertools/dpdk-devbind.py``
-
- Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device::
-
- echo "mempool_autotest" | <build_dir>/app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa"
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 84f9865654..2119ba51c8 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -178,7 +178,7 @@ Runtime Config Options
* ``rss_adder<7:0> = flow_tag<7:0>``
Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
+ RSS adder scheme used in OCTEON 9 products.
By default, the driver runs in the latter mode.
Setting this flag to 1 to select the legacy mode.
@@ -291,7 +291,7 @@ Limitations
The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager.
``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
+recycling on OCTEON 9 SoC platform.
CRC stripping
~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
deleted file mode 100644
index bf0c2890f2..0000000000
--- a/doc/guides/nics/features/octeontx2.ini
+++ /dev/null
@@ -1,97 +0,0 @@
-;
-; Supported features of the 'octeontx2' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Rx interrupt = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-TSO = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Timesync = Y
-Timestamp offload = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Stats per queue = Y
-Extended stats = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
-
-[rte_flow items]
-any = Y
-arp_eth_ipv4 = Y
-esp = Y
-eth = Y
-e_tag = Y
-geneve = Y
-gre = Y
-gre_key = Y
-gtpc = Y
-gtpu = Y
-higig2 = Y
-icmp = Y
-ipv4 = Y
-ipv6 = Y
-ipv6_ext = Y
-mpls = Y
-nvgre = Y
-raw = Y
-sctp = Y
-tcp = Y
-udp = Y
-vlan = Y
-vxlan = Y
-vxlan_gpe = Y
-
-[rte_flow actions]
-count = Y
-drop = Y
-flag = Y
-mark = Y
-of_pop_vlan = Y
-of_push_vlan = Y
-of_set_vlan_pcp = Y
-of_set_vlan_vid = Y
-pf = Y
-port_id = Y
-port_representor = Y
-queue = Y
-rss = Y
-security = Y
-vf = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
deleted file mode 100644
index c405db7cf9..0000000000
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ /dev/null
@@ -1,48 +0,0 @@
-;
-; Supported features of the 'octeontx2_vec' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
deleted file mode 100644
index 5ac7a49a5c..0000000000
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ /dev/null
@@ -1,45 +0,0 @@
-;
-; Supported features of the 'octeontx2_vf' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-Multiprocess aware = Y
-Rx interrupt = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-TSO = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1c94caccea..f48e9f815c 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -52,7 +52,6 @@ Network Interface Controller Drivers
ngbe
null
octeontx
- octeontx2
octeontx_ep
pfe
qede
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
deleted file mode 100644
index 4ce067f2c5..0000000000
--- a/doc/guides/nics/octeontx2.rst
+++ /dev/null
@@ -1,465 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(C) 2019 Marvell International Ltd.
-
-OCTEON TX2 Poll Mode driver
-===========================
-
-The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
-driver support for the inbuilt network device found in **Marvell OCTEON TX2**
-SoC family as well as for their virtual functions (VF) in SR-IOV context.
-
-More information can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 Ethdev PMD are:
-
-- Packet type information
-- Promiscuous mode
-- Jumbo frames
-- SR-IOV VF
-- Lock-free Tx queue
-- Multiple queues for TX and RX
-- Receiver Side Scaling (RSS)
-- MAC/VLAN filtering
-- Multicast MAC filtering
-- Generic flow API
-- Inner and Outer Checksum offload
-- VLAN/QinQ stripping and insertion
-- Port hardware statistics
-- Link state information
-- Link flow control
-- MTU update
-- Scatter-Gather IO support
-- Vector Poll mode driver
-- Debug utilities - Context dump and error interrupt support
-- IEEE1588 timestamping
-- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
-- Support Rx interrupt
-- Inline IPsec processing support
-- :ref:`Traffic Management API <otx2_tmapi>`
-
-Prerequisites
--------------
-
-See :doc:`../platform/octeontx2` for setup information.
-
-
-Driver compilation and testing
-------------------------------
-
-Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
-for details.
-
-#. Running testpmd:
-
- Follow instructions available in the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
- to run testpmd.
-
- Example output:
-
- .. code-block:: console
-
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
- EAL: Detected 24 lcore(s)
- EAL: Detected 1 NUMA nodes
- EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
- EAL: No available hugepages reported in hugepages-2048kB
- EAL: Probing VFIO support...
- EAL: VFIO support initialized
- EAL: PCI device 0002:02:00.0 on NUMA socket 0
- EAL: probe driver: 177d:a063 net_octeontx2
- EAL: using IOMMU type 1 (Type 1)
- testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
- testpmd: preferred mempool ops selected: octeontx2_npa
- Configuring Port 0 (socket 0)
- PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
-
- Port 0: link state change event
- Port 0: 36:10:66:88:7A:57
- Checking link statuses...
- Done
- No commandline core given, start packet forwarding
- io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
- Logical Core 9 (socket 0) forwards packets on 1 streams:
- RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
-
- io packet forwarding packets/burst=32
- nb forwarding cores=1 - nb forwarding ports=1
- port 0: RX queue number: 1 Tx queue number: 1
- Rx offloads=0x0 Tx offloads=0x10000
- RX queue: 0
- RX desc=512 - RX free threshold=0
- RX threshold registers: pthresh=0 hthresh=0 wthresh=0
- RX Offloads=0x0
- TX queue: 0
- TX desc=512 - TX free threshold=0
- TX threshold registers: pthresh=0 hthresh=0 wthresh=0
- TX offloads=0x10000 - TX RS bit threshold=0
- Press enter to exit
-
-Runtime Config Options
-----------------------
-
-- ``Rx&Tx scalar mode enable`` (default ``0``)
-
- Ethdev supports both scalar and vector mode, it may be selected at runtime
- using ``scalar_enable`` ``devargs`` parameter.
-
-- ``RSS reta size`` (default ``64``)
-
- RSS redirection table size may be configured during runtime using ``reta_size``
- ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,reta_size=256
-
- With the above configuration, reta table of size 256 is populated.
-
-- ``Flow priority levels`` (default ``3``)
-
- RTE Flow priority levels can be configured during runtime using
- ``flow_max_priority`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_max_priority=10
-
- With the above configuration, priority level was set to 10 (0-9). Max
- priority level supported is 32.
-
-- ``Reserve Flow entries`` (default ``8``)
-
- RTE flow entries can be pre allocated and the size of pre allocation can be
- selected runtime using ``flow_prealloc_size`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_prealloc_size=4
-
- With the above configuration, pre alloc size was set to 4. Max pre alloc
- size supported is 32.
-
-- ``Max SQB buffer count`` (default ``512``)
-
- Send queue descriptor buffer count may be limited during runtime using
- ``max_sqb_count`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,max_sqb_count=64
-
- With the above configuration, each send queue's descriptor buffer count is
- limited to a maximum of 64 buffers.
-
-- ``Switch header enable`` (default ``none``)
-
- A port can be configured to a specific switch header type by using
- ``switch_header`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,switch_header="higig2"
-
- With the above configuration, higig2 will be enabled on that port and the
- traffic on this port should be higig2 traffic only. Supported switch header
- types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
-
-- ``RSS tag as XOR`` (default ``0``)
-
- C0 HW revision onward, The HW gives an option to configure the RSS adder as
-
- * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
-
- * ``rss_adder<7:0> = flow_tag<7:0>``
-
- Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
-
- By default, the driver runs in the latter mode from C0 HW revision onward.
- Setting this flag to 1 to select the legacy mode.
-
- For example to select the legacy mode(RSS tag adder as XOR)::
-
- -a 0002:02:00.0,tag_as_xor=1
-
-- ``Max SPI for inbound inline IPsec`` (default ``1``)
-
- Max SPI supported for inbound inline IPsec processing can be specified by
- ``ipsec_in_max_spi`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,ipsec_in_max_spi=128
-
- With the above configuration, application can enable inline IPsec processing
- on 128 SAs (SPI 0-127).
-
-- ``Lock Rx contexts in NDC cache``
-
- Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_rx_ctx=1
-
-- ``Lock Tx contexts in NDC cache``
-
- Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_tx_ctx=1
-
-.. note::
-
- Above devarg parameters are configurable per device, user needs to pass the
- parameters to all the PCIe devices if application requires to configure on
- all the ethdev ports.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-.. _otx2_tmapi:
-
-Traffic Management API
-----------------------
-
-OCTEON TX2 PMD supports generic DPDK Traffic Management API which allows to
-configure the following features:
-
-#. Hierarchical scheduling
-#. Single rate - Two color, Two rate - Three color shaping
-
-Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
-
-Every parent can have atmost 10 SP Children and unlimited DWRR children.
-
-Both PF & VF supports traffic management API with PF supporting 6 levels
-and VF supporting 5 levels of topology.
-
-Limitations
------------
-
-``mempool_octeontx2`` external mempool handler dependency
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler
-as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
-the host interface irrespective of the offload configuration.
-
-Multicast MAC filtering
-~~~~~~~~~~~~~~~~~~~~~~~
-
-``net_octeontx2`` PMD supports multicast mac filtering feature only on physical
-function devices.
-
-SDP interface support
-~~~~~~~~~~~~~~~~~~~~~
-OCTEON TX2 SDP interface support is limited to PF device, No VF support.
-
-Inline Protocol Processing
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` PMD doesn't support the following features for packets to be
-inline protocol processed.
-- TSO offload
-- VLAN/QinQ offload
-- Fragmentation
-
-Debugging Options
------------------
-
-.. _table_octeontx2_ethdev_debug_options:
-
-.. table:: OCTEON TX2 ethdev debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
- +---+------------+-------------------------------------------------------+
-
-RTE Flow Support
-----------------
-
-The OCTEON TX2 SoC family NIC has support for the following patterns and
-actions.
-
-Patterns:
-
-.. _table_octeontx2_supported_flow_item_types:
-
-.. table:: Item types
-
- +----+--------------------------------+
- | # | Pattern Type |
- +====+================================+
- | 1 | RTE_FLOW_ITEM_TYPE_ETH |
- +----+--------------------------------+
- | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
- +----+--------------------------------+
- | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
- +----+--------------------------------+
- | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
- +----+--------------------------------+
- | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
- +----+--------------------------------+
- | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
- +----+--------------------------------+
- | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
- +----+--------------------------------+
- | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
- +----+--------------------------------+
- | 9 | RTE_FLOW_ITEM_TYPE_UDP |
- +----+--------------------------------+
- | 10 | RTE_FLOW_ITEM_TYPE_TCP |
- +----+--------------------------------+
- | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
- +----+--------------------------------+
- | 12 | RTE_FLOW_ITEM_TYPE_ESP |
- +----+--------------------------------+
- | 13 | RTE_FLOW_ITEM_TYPE_GRE |
- +----+--------------------------------+
- | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
- +----+--------------------------------+
- | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
- +----+--------------------------------+
- | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
- +----+--------------------------------+
- | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
- +----+--------------------------------+
- | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
- +----+--------------------------------+
- | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
- +----+--------------------------------+
- | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT |
- +----+--------------------------------+
- | 21 | RTE_FLOW_ITEM_TYPE_VOID |
- +----+--------------------------------+
- | 22 | RTE_FLOW_ITEM_TYPE_ANY |
- +----+--------------------------------+
- | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
- +----+--------------------------------+
- | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 |
- +----+--------------------------------+
- | 25 | RTE_FLOW_ITEM_TYPE_RAW |
- +----+--------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
- bits in the GRE header are equal to 0.
-
-Actions:
-
-.. _table_octeontx2_supported_ingress_action_types:
-
-.. table:: Ingress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_VOID |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_MARK |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
- +----+-----------------------------------------+
- | 7 | RTE_FLOW_ACTION_TYPE_RSS |
- +----+-----------------------------------------+
- | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
- +----+-----------------------------------------+
- | 9 | RTE_FLOW_ACTION_TYPE_PF |
- +----+-----------------------------------------+
- | 10 | RTE_FLOW_ACTION_TYPE_VF |
- +----+-----------------------------------------+
- | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN |
- +----+-----------------------------------------+
- | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID |
- +----+-----------------------------------------+
- | 13 | RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR |
- +----+-----------------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ACTION_TYPE_PORT_ID``, ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR``
- are only supported between PF and its VFs.
-
-.. _table_octeontx2_supported_egress_action_types:
-
-.. table:: Egress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP |
- +----+-----------------------------------------+
-
-Custom protocols supported in RTE Flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols.
-
-* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
-* ``NGIO`` can be parsed at L3 level.
-
-For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
-respective switch header.
-
-For example::
-
- -a 0002:02:00.0,switch_header="vlan_exdsa"
-
-The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
-pattern.
-
-- ``relative`` Selects the layer at which parsing is done.
-
- - 0 for ``exdsa`` and ``vlan_exdsa``.
-
- - 1 for ``NGIO``.
-
-- ``offset`` The offset in the header where the pattern should be matched.
-- ``length`` Length of the pattern.
-- ``pattern`` Pattern as a byte string.
-
-Example usage in testpmd::
-
- ./dpdk-testpmd -c 3 -w 0002:02:00.0,switch_header=exdsa -- -i \
- --rx-offloads=0x00080000 --rxq 8 --txq 8
- testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
- spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst
index b512ccfdab..2ec8a034b5 100644
--- a/doc/guides/nics/octeontx_ep.rst
+++ b/doc/guides/nics/octeontx_ep.rst
@@ -5,7 +5,7 @@ OCTEON TX EP Poll Mode driver
=============================
The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
-ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2**
+ethdev driver support for the virtual functions (VF) of **Marvell OCTEON 9**
and **Cavium OCTEON TX** families of adapters in SR-IOV context.
More information can be found at `Marvell Official Website
@@ -24,4 +24,4 @@ must be installed separately:
allocates resources such as number of VFs, input/output queues for itself and
the number of i/o queues each VF can use.
-See :doc:`../platform/octeontx2` for SDP interface information which provides PCIe endpoint support for a remote host.
+See :doc:`../platform/cnxk` for information on the SDP interface, which provides PCIe endpoint support for a remote host.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 5213df3ccd..97e38c868c 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -13,6 +13,9 @@ More information about CN9K and CN10K SoC can be found at `Marvell Official Webs
Supported OCTEON cnxk SoCs
--------------------------
+- CN93xx
+- CN96xx
+- CN98xx
- CN106xx
- CNF105xx
@@ -583,6 +586,15 @@ Cross Compilation
Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
+CN9K:
+
+.. code-block:: console
+
+ meson build --cross-file config/arm/arm64_cn9k_linux_gcc
+ ninja -C build
+
+CN10K:
+
.. code-block:: console
meson build --cross-file config/arm/arm64_cn10k_linux_gcc
diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
deleted file mode 100644
index ecd575947a..0000000000
--- a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
+++ /dev/null
@@ -1,2804 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
-<svg
- xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="631.91431"
- height="288.34286"
- id="svg3868"
- version="1.1"
- inkscape:version="0.92.4 (5da689c313, 2019-01-14)"
- sodipodi:docname="octeontx2_packet_flow_hw_accelerators.svg"
- sodipodi:version="0.32"
- inkscape:output_extension="org.inkscape.output.svg.inkscape">
- <defs
- id="defs3870">
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker18508"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Send">
- <path
- transform="scale(0.2) rotate(180) translate(6,0)"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path18506" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Sstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker18096"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path18094"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) translate(6,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker17550"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Sstart"
- inkscape:collect="always">
- <path
- transform="scale(0.2) translate(6,0)"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path17548" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker17156"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow1Send">
- <path
- transform="scale(0.2) rotate(180) translate(6,0)"
- style="fill-rule:evenodd;stroke:#00db00;stroke-width:1pt;stroke-opacity:1;fill:#00db00;fill-opacity:1"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- id="path17154" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient13962">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop13958" />
- <stop
- style="stop-color:#fc0000;stop-opacity:0;"
- offset="1"
- id="stop13960" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Send"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow1Send"
- style="overflow:visible;"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6218"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) rotate(180) translate(6,0)" />
- </marker>
- <linearGradient
- id="linearGradient13170"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop13168" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker12747"
- style="overflow:visible;"
- inkscape:isstock="true">
- <path
- id="path12745"
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#ff0000;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- transform="scale(0.6) rotate(180) translate(0,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker10821"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow2Mend"
- inkscape:collect="always">
- <path
- transform="scale(0.6) rotate(180) translate(0,0)"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- id="path10819" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible;"
- id="marker10463"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="Arrow2Mend">
- <path
- transform="scale(0.6) rotate(180) translate(0,0)"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- id="path10461" />
- </marker>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow2Mend"
- style="overflow:visible;"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6230"
- style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#fe0000;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
- transform="scale(0.6) rotate(180) translate(0,0)" />
- </marker>
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker9807"
- refX="0.0"
- refY="0.0"
- orient="auto"
- inkscape:stockid="TriangleOutS">
- <path
- transform="scale(0.2)"
- style="fill-rule:evenodd;stroke:#fe0000;stroke-width:1pt;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
- id="path9805" />
- </marker>
- <marker
- inkscape:stockid="TriangleOutS"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="TriangleOutS"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6351"
- d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
- style="fill-rule:evenodd;stroke:#fe0000;stroke-width:1pt;stroke-opacity:1;fill:#fe0000;fill-opacity:1"
- transform="scale(0.2)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Sstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="Arrow1Sstart"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- id="path6215"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.2) translate(6,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient4340">
- <stop
- style="stop-color:#d7eef4;stop-opacity:1;"
- offset="0"
- id="stop4336" />
- <stop
- style="stop-color:#d7eef4;stop-opacity:0;"
- offset="1"
- id="stop4338" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient4330">
- <stop
- style="stop-color:#d7eef4;stop-opacity:1;"
- offset="0"
- id="stop4326" />
- <stop
- style="stop-color:#d7eef4;stop-opacity:0;"
- offset="1"
- id="stop4328" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient3596">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3592" />
- <stop
- style="stop-color:#6ba6fd;stop-opacity:0;"
- offset="1"
- id="stop3594" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker9460"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path9458"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker7396"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path7133"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5474">
- <stop
- style="stop-color:#ffffff;stop-opacity:1;"
- offset="0"
- id="stop5470" />
- <stop
- style="stop-color:#ffffff;stop-opacity:0;"
- offset="1"
- id="stop5472" />
- </linearGradient>
- <linearGradient
- id="linearGradient6545"
- osb:paint="solid">
- <stop
- style="stop-color:#ffa600;stop-opacity:1;"
- offset="0"
- id="stop6543" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3302"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3294"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3290"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3286"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3228"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3188"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3184"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3180"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3176"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3172"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3168"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3164"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3160"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120"
- is_visible="true" />
- <linearGradient
- id="linearGradient3114"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3112" />
- </linearGradient>
- <linearGradient
- id="linearGradient3088"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3086" />
- </linearGradient>
- <linearGradient
- id="linearGradient3058"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3056" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3054"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3050"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3046"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3042"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3038"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3034"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3030"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3004"
- is_visible="true" />
- <linearGradient
- id="linearGradient2975"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2200;stop-opacity:1;"
- offset="0"
- id="stop2973" />
- </linearGradient>
- <linearGradient
- id="linearGradient2969"
- osb:paint="solid">
- <stop
- style="stop-color:#69ff72;stop-opacity:1;"
- offset="0"
- id="stop2967" />
- </linearGradient>
- <linearGradient
- id="linearGradient2963"
- osb:paint="solid">
- <stop
- style="stop-color:#000000;stop-opacity:1;"
- offset="0"
- id="stop2961" />
- </linearGradient>
- <linearGradient
- id="linearGradient2929"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2d00;stop-opacity:1;"
- offset="0"
- id="stop2927" />
- </linearGradient>
- <linearGradient
- id="linearGradient4610"
- osb:paint="solid">
- <stop
- style="stop-color:#00ffff;stop-opacity:1;"
- offset="0"
- id="stop4608" />
- </linearGradient>
- <linearGradient
- id="linearGradient3993"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3991" />
- </linearGradient>
- <linearGradient
- id="linearGradient3808"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3806" />
- </linearGradient>
- <linearGradient
- id="linearGradient3776"
- osb:paint="solid">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop3774" />
- </linearGradient>
- <linearGradient
- id="linearGradient3438"
- osb:paint="solid">
- <stop
- style="stop-color:#b8e132;stop-opacity:1;"
- offset="0"
- id="stop3436" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3408"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3404"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3400"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3392"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3376"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3040"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3036"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3032"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3028"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3024"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3020"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2854"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect2844"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <linearGradient
- id="linearGradient2828"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop2826" />
- </linearGradient>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect329"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart"
- style="overflow:visible">
- <path
- id="path4530"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend"
- style="overflow:visible">
- <path
- id="path4533"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- id="linearGradient4513">
- <stop
- style="stop-color:#fdffdb;stop-opacity:1;"
- offset="0"
- id="stop4515" />
- <stop
- style="stop-color:#dfe2d8;stop-opacity:0;"
- offset="1"
- id="stop4517" />
- </linearGradient>
- <inkscape:perspective
- sodipodi:type="inkscape:persp3d"
- inkscape:vp_x="0 : 526.18109 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_z="744.09448 : 526.18109 : 1"
- inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
- id="perspective3876" />
- <inkscape:perspective
- id="perspective3886"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lend"
- style="overflow:visible">
- <path
- id="path3211"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3892"
- style="overflow:visible">
- <path
- id="path3894"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3896"
- style="overflow:visible">
- <path
- id="path3898"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lstart"
- style="overflow:visible">
- <path
- id="path3208"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3902"
- style="overflow:visible">
- <path
- id="path3904"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3906"
- style="overflow:visible">
- <path
- id="path3908"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3910"
- style="overflow:visible">
- <path
- id="path3912"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective4086"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective4113"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective5195"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-4"
- style="overflow:visible">
- <path
- id="path4533-7"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5272"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-4"
- style="overflow:visible">
- <path
- id="path4530-5"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-0"
- style="overflow:visible">
- <path
- id="path4533-3"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5317"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-3"
- style="overflow:visible">
- <path
- id="path4530-2"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-06"
- style="overflow:visible">
- <path
- id="path4533-1"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-8"
- style="overflow:visible">
- <path
- id="path4530-7"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-9"
- style="overflow:visible">
- <path
- id="path4533-2"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858-0"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3"
- style="overflow:visible">
- <path
- id="path4533-75"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3-2"
- style="overflow:visible">
- <path
- id="path4533-75-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008-3"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7-3"
- is_visible="true" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5695"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,206.76869,3.9208776)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-34"
- style="overflow:visible">
- <path
- id="path4530-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-45"
- style="overflow:visible">
- <path
- id="path4533-16"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7"
- style="overflow:visible">
- <path
- id="path4530-58"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1"
- style="overflow:visible">
- <path
- id="path4533-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-6"
- style="overflow:visible">
- <path
- id="path4530-58-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2"
- style="overflow:visible">
- <path
- id="path4530-58-46"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1"
- style="overflow:visible">
- <path
- id="path4533-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2-6"
- style="overflow:visible">
- <path
- id="path4530-58-46-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-4-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#grad0-40"
- id="linearGradient5917"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(8.8786147,-0.0235964,-0.00460261,1.50035,-400.25558,-2006.3745)"
- x1="-0.12893644"
- y1="1717.1688"
- x2="28.140806"
- y2="1717.1688" />
- <linearGradient
- id="grad0-40"
- x1="0"
- y1="0"
- x2="1"
- y2="0"
- gradientTransform="rotate(60,0.5,0.5)">
- <stop
- offset="0"
- stop-color="#f3f6fa"
- stop-opacity="1"
- id="stop3419" />
- <stop
- offset="0.24"
- stop-color="#f9fafc"
- stop-opacity="1"
- id="stop3421" />
- <stop
- offset="0.54"
- stop-color="#feffff"
- stop-opacity="1"
- id="stop3423" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30"
- style="overflow:visible">
- <path
- id="path4530-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6"
- style="overflow:visible">
- <path
- id="path4533-19"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0"
- style="overflow:visible">
- <path
- id="path4530-0-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8"
- style="overflow:visible">
- <path
- id="path4533-19-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9"
- style="overflow:visible">
- <path
- id="path4530-0-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3"
- style="overflow:visible">
- <path
- id="path4533-19-6-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-7"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,321.82147,-1.8659026)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-81"
- style="overflow:visible">
- <path
- id="path4530-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-5"
- style="overflow:visible">
- <path
- id="path4533-72"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-1"
- style="overflow:visible">
- <path
- id="path4530-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker9714"
- style="overflow:visible">
- <path
- id="path9712"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48"
- style="overflow:visible">
- <path
- id="path4530-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker10117"
- style="overflow:visible">
- <path
- id="path10115"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48-6"
- style="overflow:visible">
- <path
- id="path4530-4-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker11186"
- style="overflow:visible">
- <path
- id="path11184"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9-0"
- style="overflow:visible">
- <path
- id="path4530-0-6-4-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3-7"
- style="overflow:visible">
- <path
- id="path4533-19-6-1-5"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3602"
- x1="113.62777"
- y1="238.35289"
- x2="178.07406"
- y2="238.35289"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3604"
- x1="106.04746"
- y1="231.17514"
- x2="170.49375"
- y2="231.17514"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3606"
- x1="97.456466"
- y1="223.48468"
- x2="161.90276"
- y2="223.48468"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(4,-22)" />
- <linearGradient
- gradientTransform="matrix(1.2309135,0,0,0.9993652,112.21043,-29.394096)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.2419105,0,0,0.99933655,110.714,51.863352)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.3078944,0,0,0.99916717,224.87462,63.380078)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-8-7"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="matrix(1.2309135,0,0,0.9993652,359.82239,-48.56566)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-4-9"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(-35.122992,139.17627)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(32.977515,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(100.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(168.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(236.97751,139.08289)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5-7"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(516.30192,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-5-73"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(448.30192,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-1-59"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(380.30193,138.74331)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-9-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <linearGradient
- gradientTransform="translate(312.20142,138.83669)"
- inkscape:collect="always"
- xlink:href="#linearGradient3596"
- id="linearGradient3608-8"
- x1="88.739166"
- y1="215.40981"
- x2="153.18546"
- y2="215.40981"
- gradientUnits="userSpaceOnUse" />
- <radialGradient
- inkscape:collect="always"
- xlink:href="#linearGradient4330"
- id="radialGradient4334"
- cx="222.02666"
- cy="354.61401"
- fx="222.02666"
- fy="354.61401"
- r="171.25233"
- gradientTransform="matrix(1,0,0,0.15767701,0,298.69953)"
- gradientUnits="userSpaceOnUse" />
- <radialGradient
- inkscape:collect="always"
- xlink:href="#linearGradient4340"
- id="radialGradient4342"
- cx="535.05641"
- cy="353.56737"
- fx="535.05641"
- fy="353.56737"
- r="136.95767"
- gradientTransform="matrix(1.0000096,0,0,0.19866251,-0.00515595,284.82679)"
- gradientUnits="userSpaceOnUse" />
- <marker
- inkscape:isstock="true"
- style="overflow:visible"
- id="marker28236"
- refX="0"
- refY="0"
- orient="auto"
- inkscape:stockid="Arrow2Mstart">
- <path
- inkscape:connector-curvature="0"
- transform="scale(0.6)"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- id="path28234" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3706"
- style="overflow:visible">
- <path
- id="path3704"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect14461"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9"
- style="fill:#fe0000;fill-opacity:1;fill-rule:evenodd;stroke:#fe0000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3-1"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9-8"
- style="fill:#fe0000;fill-opacity:1;fill-rule:evenodd;stroke:#fe0000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient13962"
- id="linearGradient14808"
- x1="447.95767"
- y1="176.3018"
- x2="576.27008"
- y2="176.3018"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(0,-8)" />
- <marker
- inkscape:stockid="Arrow2Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow2Mend-3-1-6"
- style="overflow:visible"
- inkscape:isstock="true"
- inkscape:collect="always">
- <path
- inkscape:connector-curvature="0"
- id="path6230-9-8-5"
- style="fill:#808080;fill-opacity:1;fill-rule:evenodd;stroke:#808080;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
- d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
- transform="scale(-0.6)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-53"
- style="overflow:visible">
- <path
- id="path4533-35"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-99"
- style="overflow:visible">
- <path
- id="path4533-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- </defs>
- <sodipodi:namedview
- id="base"
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1.0"
- inkscape:pageopacity="0.0"
- inkscape:pageshadow="2"
- inkscape:zoom="1.8101934"
- inkscape:cx="434.42776"
- inkscape:cy="99.90063"
- inkscape:document-units="px"
- inkscape:current-layer="layer1"
- showgrid="false"
- inkscape:window-width="1920"
- inkscape:window-height="1057"
- inkscape:window-x="-8"
- inkscape:window-y="-8"
- inkscape:window-maximized="1"
- fit-margin-top="0.1"
- fit-margin-left="0.1"
- fit-margin-right="0.1"
- fit-margin-bottom="0.1"
- inkscape:measure-start="-29.078,219.858"
- inkscape:measure-end="346.809,219.858"
- showguides="true"
- inkscape:snap-page="true"
- inkscape:snap-others="false"
- inkscape:snap-nodes="false"
- inkscape:snap-bbox="true"
- inkscape:lockguides="false"
- inkscape:guide-bbox="true">
- <sodipodi:guide
- position="-120.20815,574.17069"
- orientation="0,1"
- id="guide7077"
- inkscape:locked="false" />
- </sodipodi:namedview>
- <metadata
- id="metadata3873">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <g
- inkscape:label="Layer 1"
- inkscape:groupmode="layer"
- id="layer1"
- transform="translate(-46.542857,-100.33361)">
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-7"
- width="64.18129"
- height="45.550591"
- x="575.72662"
- y="144.79553" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-8-5"
- width="64.18129"
- height="45.550591"
- x="584.44391"
- y="152.87041" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-42-0"
- width="64.18129"
- height="45.550591"
- x="593.03491"
- y="160.56087" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-0-3"
- width="64.18129"
- height="45.550591"
- x="600.61523"
- y="167.73862" />
- <rect
- style="fill:#aaffcc;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26491222;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-46-4"
- width="64.18129"
- height="45.550591"
- x="608.70087"
- y="175.42906" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#aaffcc;fill-opacity:1;stroke:none"
- transform="matrix(0.71467688,0,0,0.72506311,529.61388,101.41825)"><flowRegion
- id="flowRegion1855-0"
- style="fill:#aaffcc"><rect
- id="rect1857-5"
- width="67.17514"
- height="33.941124"
- x="120.20815"
- y="120.75856"
- style="fill:#aaffcc" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#aaffcc"
- id="flowPara1976" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5313"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;letter-spacing:0px;word-spacing:0px"><flowRegion
- id="flowRegion5315"><rect
- id="rect5317"
- width="120.91525"
- height="96.873627"
- x="-192.33304"
- y="-87.130829" /></flowRegion><flowPara
- id="flowPara5319" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot8331"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion8333"><rect
- id="rect8335"
- width="48.5"
- height="28"
- x="252.5"
- y="208.34286" /></flowRegion><flowPara
- id="flowPara8337" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3606);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-8"
- width="64.18129"
- height="45.550591"
- x="101.58897"
- y="178.70938" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3604);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-42"
- width="64.18129"
- height="45.550591"
- x="110.17996"
- y="186.39984" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:url(#linearGradient3602);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-0"
- width="64.18129"
- height="45.550591"
- x="117.76027"
- y="193.57759" />
- <rect
- style="fill:#f4d7d7;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-46"
- width="64.18129"
- height="45.550591"
- x="125.84592"
- y="201.26804" />
- <rect
- style="fill:#d7f4e3;fill-opacity:1;stroke:url(#linearGradient3608-4);stroke-width:0.293915;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86"
- width="79.001617"
- height="45.521675"
- x="221.60374"
- y="163.11812" />
- <rect
- style="fill:#d7f4e3;fill-opacity:1;stroke:url(#linearGradient3608-4-8);stroke-width:0.29522076;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-5"
- width="79.70742"
- height="45.52037"
- x="221.08463"
- y="244.37004" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718"
- width="125.8186"
- height="100.36277"
- x="321.87323"
- y="112.72702" />
- <rect
- style="fill:#ffd5d5;fill-opacity:1;stroke:url(#linearGradient3608-4-8-7);stroke-width:0.30293623;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-5-3"
- width="83.942352"
- height="45.512653"
- x="341.10928"
- y="255.85414" />
- <rect
- style="fill:#ffb380;fill-opacity:1;stroke:url(#linearGradient3608-4-9);stroke-width:0.293915;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-86-2"
- width="79.001617"
- height="45.521675"
- x="469.21576"
- y="143.94656" />
- <rect
- style="opacity:1;fill:url(#radialGradient4334);fill-opacity:1;stroke:#6ba6fd;stroke-width:0.32037571;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3783"
- width="342.1843"
- height="53.684738"
- x="50.934502"
- y="327.77164" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1"
- width="64.18129"
- height="45.550591"
- x="53.748672"
- y="331.81079" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3"
- width="64.18129"
- height="45.550591"
- x="121.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6"
- width="64.18129"
- height="45.550591"
- x="189.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4"
- width="64.18129"
- height="45.550591"
- x="257.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-7);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-9"
- width="64.18129"
- height="45.550591"
- x="325.84918"
- y="331.71741" />
- <rect
- style="opacity:1;fill:url(#radialGradient4342);fill-opacity:1;stroke:#6ba6fd;stroke-width:0.28768006;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3783-8"
- width="273.62766"
- height="54.131645"
- x="398.24258"
- y="328.00156" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-5"
- width="64.18129"
- height="45.550591"
- x="401.07309"
- y="331.47122" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-0"
- width="64.18129"
- height="45.550591"
- x="469.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-59);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-3"
- width="64.18129"
- height="45.550591"
- x="537.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-73);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-6"
- width="64.18129"
- height="45.550591"
- x="605.17358"
- y="331.37781" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3"
- width="27.798103"
- height="21.434149"
- x="325.80197"
- y="117.21037" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="140.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="164.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5"
- width="27.798103"
- height="21.434149"
- x="356.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="140.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="164.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5"
- width="27.798103"
- height="21.434149"
- x="386.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="164.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9"
- width="27.798103"
- height="21.434149"
- x="416.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="164.38896" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5"
- width="27.798103"
- height="21.434149"
- x="324.61139"
- y="187.85849" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0"
- width="27.798103"
- height="21.434149"
- x="355.17996"
- y="188.03886" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0"
- width="27.798103"
- height="21.434149"
- x="385.17996"
- y="188.03888" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4"
- width="27.798103"
- height="21.434149"
- x="415.17996"
- y="188.03889" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-5"
- width="125.8186"
- height="100.36277"
- x="452.24075"
- y="208.56764" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-9"
- width="27.798103"
- height="21.434149"
- x="456.16949"
- y="213.05098" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-8"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="236.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-55"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="260.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-7"
- width="27.798103"
- height="21.434149"
- x="486.73807"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-5"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="236.22954" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-3"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-2"
- width="27.798103"
- height="21.434149"
- x="516.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-5"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-1"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9-6"
- width="27.798103"
- height="21.434149"
- x="546.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3-1"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-7"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5-1"
- width="27.798103"
- height="21.434149"
- x="454.97891"
- y="283.6991" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0-6"
- width="27.798103"
- height="21.434149"
- x="485.54749"
- y="283.87946" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0-7"
- width="27.798103"
- height="21.434149"
- x="515.54749"
- y="283.87949" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4-2"
- width="27.798103"
- height="21.434149"
- x="545.54749"
- y="283.87952" />
- <g
- id="g5089"
- transform="matrix(0.7206312,0,0,1.0073979,12.37404,-312.02679)"
- style="fill:#ff8080">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#ff8080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455" />
- <path
- inkscape:connector-curvature="0"
- id="path5083"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#ff8080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
- </g>
- <g
- id="g5089-4"
- transform="matrix(-0.6745281,0,0,0.97266112,143.12774,-266.3349)"
- style="fill:#000080;fill-opacity:1">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#000080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455-9" />
- <path
- inkscape:connector-curvature="0"
- id="path5083-2"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#000080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;fill-opacity:1" />
- </g>
- <flowRoot
- xml:space="preserve"
- id="flowRoot5112"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(52.199711,162.55901)"><flowRegion
- id="flowRegion5114"><rect
- id="rect5116"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118">Tx</flowPara></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5112-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(49.878465,112.26812)"><flowRegion
- id="flowRegion5114-7"><rect
- id="rect5116-7"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118-5">Rx</flowPara></flowRoot> <path
- style="fill:none;stroke:#f60300;stroke-width:0.783;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:0.783, 0.78300000000000003;stroke-dashoffset:0;marker-start:url(#Arrow1Sstart);marker-end:url(#TriangleOutS)"
- d="m 116.81066,179.28348 v -11.31903 l -0.37893,-12.93605 0.37893,-5.25526 3.03134,-5.25526 4.16811,-2.82976 8.3362,-1.61701 h 7.19945 l 7.19946,2.02126 3.03135,2.02126 0.37892,2.02125 -0.37892,3.23401 -0.37892,7.27652 -0.37892,8.48927 -0.37892,14.55304"
- id="path8433"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="104.04285"
- y="144.86398"
- id="text9071"><tspan
- sodipodi:role="line"
- id="tspan9069"
- x="104.04285"
- y="144.86398"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">HW loop back device</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="59.542858"
- y="53.676483"
- id="text9621"><tspan
- sodipodi:role="line"
- id="tspan9619"
- x="59.542858"
- y="65.840889" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2-4-3-9-0-2-9-5-6-7-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,454.1297,247.6848)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9-2-5-4-1-1-1-4-0-5-4"><rect
- id="rect1857-5-1-5-2-6-1-4-9-3-8-1-8-5-7-9-1"
- width="162.09244"
- height="78.764809"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9723" /></flowRoot> <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow2Mend)"
- d="m 181.60025,194.22211 12.72792,-7.07106 14.14214,-2.82843 12.02081,0.70711 h 1.41422 v 0"
- id="path9797"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker10821)"
- d="m 179.47893,193.51501 3.53554,-14.14214 5.65685,-12.72792 16.97056,-9.19239 8.48528,-9.19238 14.84924,-7.77818 24.04163,-8.48528 18.38478,-6.36396 38.89087,-2.82843 h 12.02082 l -2.12132,-0.7071"
- id="path10453"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.70021206;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.70021208, 0.70021208;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3)"
- d="m 299.68795,188.0612 7.97521,-5.53298 8.86135,-2.2132 7.53214,0.5533 h 0.88614 v 0"
- id="path9797-9"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1)"
- d="m 300.49277,174.25976 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker12747)"
- d="m 299.68708,196.34344 9.19239,7.77817 7.07107,1.41421 h 4.94974 v 0"
- id="path12737"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:url(#linearGradient14808);stroke-width:4.66056013;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:4.66056002, 4.66056002;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Send)"
- d="m 447.95767,168.30181 c 119.99171,0 119.99171,0 119.99171,0"
- id="path13236"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#808080;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673000000001;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1-6)"
- d="m 529.56098,142.71226 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 612.93538,222.50639 -5.65686,12.72792 -14.84924,3.53553 -14.14213,0.70711"
- id="path16128"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 624.95619,220.38507 -3.53553,13.43502 -12.72792,14.84925 -9.19239,5.65685 -19.09188,2.82843 -1.41422,-0.70711 h -1.41421"
- id="path16130"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 635.56279,221.09217 -7.77817,33.94113 -4.24264,6.36396 -8.48528,3.53553 -10.6066,4.94975 -19.09189,5.65685 -6.36396,3.53554"
- id="path16132"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1.01083219;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.01083222, 1.01083221999999995;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-53)"
- d="m 456.03282,270.85761 -4.96024,14.83162 -13.02062,4.11988 -12.40058,0.82399"
- id="path16128-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.80101544;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.80101541, 0.80101540999999998;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-99)"
- d="m 341.29831,266.70565 -6.88826,6.70663 -18.08168,1.86296 -17.22065,0.37258"
- id="path16128-6"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00faf5;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 219.78402,264.93279 -6.36396,-9.89949 -3.53554,-16.26346 -7.77817,-8.48528 -8.48528,-4.94975 -4.94975,-2.82842"
- id="path17144"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00db00;stroke-width:1.4;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.4, 1.39999999999999991;stroke-dashoffset:0;marker-end:url(#marker17156);marker-start:url(#marker17550)"
- d="m 651.11914,221.09217 -7.07107,31.81981 -17.67766,34.64823 -21.21321,26.87005 -80.61017,1.41422 -86.97413,1.41421 -79.90306,-3.53553 -52.3259,1.41421 -24.04163,10.6066 -2.82843,1.41422"
- id="path17146"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#000000;stroke-width:1.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.3, 1.30000000000000004;stroke-dashoffset:0;marker-start:url(#marker18096);marker-end:url(#marker18508)"
- d="M 659.60442,221.09217 C 656.776,327.86529 656.776,328.5724 656.776,328.5724"
- id="path18086"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,137.7802,161.1139)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9"><rect
- id="rect1857-5-1-5-2-6-1"
- width="174.19844"
- height="91.867104"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9188-8-4" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="155.96185"
- y="220.07472"
- id="text9071-6"><tspan
- sodipodi:role="line"
- x="158.29518"
- y="220.07472"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2100"> <tspan
- style="fill:#0000ff"
- id="tspan2327">Ethdev Ports </tspan></tspan><tspan
- sodipodi:role="line"
- x="155.96185"
- y="236.74139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104">(NIX)</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2106"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2108"><rect
- id="rect2110"
- width="42.1875"
- height="28.125"
- x="178.125"
- y="71.155365" /></flowRegion><flowPara
- id="flowPara2112" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2114"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2116"><rect
- id="rect2118"
- width="38.28125"
- height="28.90625"
- x="196.09375"
- y="74.280365" /></flowRegion><flowPara
- id="flowPara2120" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2122"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2124"><rect
- id="rect2126"
- width="39.0625"
- height="23.4375"
- x="186.71875"
- y="153.96786" /></flowRegion><flowPara
- id="flowPara2128" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="262.1366"
- y="172.08614"
- id="text9071-6-4"><tspan
- sodipodi:role="line"
- x="264.46994"
- y="172.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0">Ingress </tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="188.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176">Classification</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="205.41946"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="222.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178" /><tspan
- sodipodi:role="line"
- x="262.1366"
- y="238.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="261.26727"
- y="254.46307"
- id="text9071-6-4-9"><tspan
- sodipodi:role="line"
- x="263.60062"
- y="254.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-0">Egress </tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="271.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-8">Classification</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="287.79642"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-9">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="304.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3" /><tspan
- sodipodi:role="line"
- x="261.26727"
- y="321.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="362.7016"
- y="111.81297"
- id="text9071-4"><tspan
- sodipodi:role="line"
- id="tspan9069-8"
- x="362.7016"
- y="111.81297"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Rx Queues</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="488.21777"
- y="207.21898"
- id="text9071-4-3"><tspan
- sodipodi:role="line"
- id="tspan9069-8-8"
- x="488.21777"
- y="207.21898"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Tx Queues</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2311"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2313"><rect
- id="rect2315"
- width="49.21875"
- height="41.40625"
- x="195.3125"
- y="68.811615" /></flowRegion><flowPara
- id="flowPara2317" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2319"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2321"><rect
- id="rect2323"
- width="40.625"
- height="39.0625"
- x="196.09375"
- y="69.592865" /></flowRegion><flowPara
- id="flowPara2325" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="382.20477"
- y="263.74432"
- id="text9071-6-4-6"><tspan
- sodipodi:role="line"
- x="382.20477"
- y="263.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-9">Egress</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="280.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-3">Traffic Manager</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="297.07767"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-1">(NIX)</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="313.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-6" /><tspan
- sodipodi:role="line"
- x="382.20477"
- y="330.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="500.98602"
- y="154.02556"
- id="text9071-6-4-0"><tspan
- sodipodi:role="line"
- x="503.31937"
- y="154.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-97">Scheduler </tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="170.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2389" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="187.35889"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2391">SSO</tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="204.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-60" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="220.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-3" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="571.61627"
- y="119.24016"
- id="text9071-4-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-82"
- x="571.61627"
- y="119.24016"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Supports both poll mode and/or event mode</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="135.90683"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2416">by configuring scheduler</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="152.57349"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2418" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="638.14227"
- y="192.46773"
- id="text9071-6-4-9-2"><tspan
- sodipodi:role="line"
- x="638.14227"
- y="192.46773"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3-2">ARMv8</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="209.1344"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2499">Cores</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="225.80106"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="180.24902"
- y="325.09399"
- id="text9071-4-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7"
- x="180.24902"
- y="325.09399"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Hardware Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="487.8916"
- y="325.91599"
- id="text9071-4-1-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7-1"
- x="487.8916"
- y="325.91599"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Software Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="81.178604"
- y="350.03149"
- id="text9071-4-18"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83"
- x="81.178604"
- y="350.03149"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Mempool</tspan><tspan
- sodipodi:role="line"
- x="81.178604"
- y="366.69815"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555">(NPA)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="151.09518"
- y="348.77365"
- id="text9071-4-18-9"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-3"
- x="151.09518"
- y="348.77365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Timer</tspan><tspan
- sodipodi:role="line"
- x="151.09518"
- y="365.44031"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-9">(TIM)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="222.56393"
- y="347.1174"
- id="text9071-4-18-0"><tspan
- sodipodi:role="line"
- x="222.56393"
- y="347.1174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90">Crypto</tspan><tspan
- sodipodi:role="line"
- x="222.56393"
- y="363.78406"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601">(CPT)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="289.00229"
- y="347.69473"
- id="text9071-4-18-0-5"><tspan
- sodipodi:role="line"
- x="289.00229"
- y="347.69473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9">Compress</tspan><tspan
- sodipodi:role="line"
- x="289.00229"
- y="364.36139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6">(ZIP)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="355.50653"
- y="348.60098"
- id="text9071-4-18-0-5-6"><tspan
- sodipodi:role="line"
- x="355.50653"
- y="348.60098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9-5">Shared</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="365.26764"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2645">Memory</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="381.93433"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6-1" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="430.31393"
- y="356.4924"
- id="text9071-4-18-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-35"
- x="430.31393"
- y="356.4924"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">SW Ring</tspan><tspan
- sodipodi:role="line"
- x="430.31393"
- y="373.15906"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-6" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="569.37646"
- y="341.1799"
- id="text9071-4-18-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-4"
- x="569.37646"
- y="341.1799"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">HASH</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="357.84656"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2742">LPM</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="374.51324"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-2">ACL</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="503.75143"
- y="355.02365"
- id="text9071-4-18-2-3"><tspan
- sodipodi:role="line"
- x="503.75143"
- y="355.02365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2733">Mbuf</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="639.34521"
- y="355.6174"
- id="text9071-4-18-19"><tspan
- sodipodi:role="line"
- x="639.34521"
- y="355.6174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2771">De(Frag)</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg b/doc/guides/platform/img/octeontx2_resource_virtualization.svg
deleted file mode 100644
index bf976b52af..0000000000
--- a/doc/guides/platform/img/octeontx2_resource_virtualization.svg
+++ /dev/null
@@ -1,2418 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
-<svg
- xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="631.91431"
- height="288.34286"
- id="svg3868"
- version="1.1"
- inkscape:version="0.92.4 (5da689c313, 2019-01-14)"
- sodipodi:docname="octeontx2_resource_virtualization.svg"
- sodipodi:version="0.32"
- inkscape:output_extension="org.inkscape.output.svg.inkscape">
- <defs
- id="defs3870">
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker9460"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path9458"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker7396"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path7133"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5474">
- <stop
- style="stop-color:#ffffff;stop-opacity:1;"
- offset="0"
- id="stop5470" />
- <stop
- style="stop-color:#ffffff;stop-opacity:0;"
- offset="1"
- id="stop5472" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5464">
- <stop
- style="stop-color:#daeef5;stop-opacity:1;"
- offset="0"
- id="stop5460" />
- <stop
- style="stop-color:#daeef5;stop-opacity:0;"
- offset="1"
- id="stop5462" />
- </linearGradient>
- <linearGradient
- id="linearGradient6545"
- osb:paint="solid">
- <stop
- style="stop-color:#ffa600;stop-opacity:1;"
- offset="0"
- id="stop6543" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3302"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3294"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3290"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3286"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3228"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3188"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3184"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3180"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3176"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3172"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3168"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3164"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3160"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120"
- is_visible="true" />
- <linearGradient
- id="linearGradient3114"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3112" />
- </linearGradient>
- <linearGradient
- id="linearGradient3088"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3086" />
- </linearGradient>
- <linearGradient
- id="linearGradient3058"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3056" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3054"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3050"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3046"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3042"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3038"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3034"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3030"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3004"
- is_visible="true" />
- <linearGradient
- id="linearGradient2975"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2200;stop-opacity:1;"
- offset="0"
- id="stop2973" />
- </linearGradient>
- <linearGradient
- id="linearGradient2969"
- osb:paint="solid">
- <stop
- style="stop-color:#69ff72;stop-opacity:1;"
- offset="0"
- id="stop2967" />
- </linearGradient>
- <linearGradient
- id="linearGradient2963"
- osb:paint="solid">
- <stop
- style="stop-color:#000000;stop-opacity:1;"
- offset="0"
- id="stop2961" />
- </linearGradient>
- <linearGradient
- id="linearGradient2929"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2d00;stop-opacity:1;"
- offset="0"
- id="stop2927" />
- </linearGradient>
- <linearGradient
- id="linearGradient4610"
- osb:paint="solid">
- <stop
- style="stop-color:#00ffff;stop-opacity:1;"
- offset="0"
- id="stop4608" />
- </linearGradient>
- <linearGradient
- id="linearGradient3993"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3991" />
- </linearGradient>
- <linearGradient
- id="linearGradient3808"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3806" />
- </linearGradient>
- <linearGradient
- id="linearGradient3776"
- osb:paint="solid">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop3774" />
- </linearGradient>
- <linearGradient
- id="linearGradient3438"
- osb:paint="solid">
- <stop
- style="stop-color:#b8e132;stop-opacity:1;"
- offset="0"
- id="stop3436" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3408"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3404"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3400"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3392"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3376"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3040"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3036"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3032"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3028"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3024"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3020"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2854"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect2844"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <linearGradient
- id="linearGradient2828"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop2826" />
- </linearGradient>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect329"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart"
- style="overflow:visible">
- <path
- id="path4530"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend"
- style="overflow:visible">
- <path
- id="path4533"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- id="linearGradient4513">
- <stop
- style="stop-color:#fdffdb;stop-opacity:1;"
- offset="0"
- id="stop4515" />
- <stop
- style="stop-color:#dfe2d8;stop-opacity:0;"
- offset="1"
- id="stop4517" />
- </linearGradient>
- <inkscape:perspective
- sodipodi:type="inkscape:persp3d"
- inkscape:vp_x="0 : 526.18109 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_z="744.09448 : 526.18109 : 1"
- inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
- id="perspective3876" />
- <inkscape:perspective
- id="perspective3886"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lend"
- style="overflow:visible">
- <path
- id="path3211"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3892"
- style="overflow:visible">
- <path
- id="path3894"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3896"
- style="overflow:visible">
- <path
- id="path3898"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lstart"
- style="overflow:visible">
- <path
- id="path3208"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3902"
- style="overflow:visible">
- <path
- id="path3904"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3906"
- style="overflow:visible">
- <path
- id="path3908"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3910"
- style="overflow:visible">
- <path
- id="path3912"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective4086"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective4113"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective5195"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-4"
- style="overflow:visible">
- <path
- id="path4533-7"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5272"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-4"
- style="overflow:visible">
- <path
- id="path4530-5"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-0"
- style="overflow:visible">
- <path
- id="path4533-3"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5317"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-3"
- style="overflow:visible">
- <path
- id="path4530-2"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-06"
- style="overflow:visible">
- <path
- id="path4533-1"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-8"
- style="overflow:visible">
- <path
- id="path4530-7"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-9"
- style="overflow:visible">
- <path
- id="path4533-2"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858-0"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3"
- style="overflow:visible">
- <path
- id="path4533-75"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3-2"
- style="overflow:visible">
- <path
- id="path4533-75-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008-3"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7-3"
- is_visible="true" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5464"
- id="linearGradient5466"
- x1="65.724048"
- y1="169.38839"
- x2="183.38978"
- y2="169.38839"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-14,-4)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5476"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,105.65926,-0.6580533)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5658"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,148.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5695"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,206.76869,3.9208776)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-34"
- style="overflow:visible">
- <path
- id="path4530-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-45"
- style="overflow:visible">
- <path
- id="path4533-16"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7"
- style="overflow:visible">
- <path
- id="path4530-58"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1"
- style="overflow:visible">
- <path
- id="path4533-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-6"
- style="overflow:visible">
- <path
- id="path4530-58-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2"
- style="overflow:visible">
- <path
- id="path4530-58-46"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1"
- style="overflow:visible">
- <path
- id="path4533-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2-6"
- style="overflow:visible">
- <path
- id="path4530-58-46-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-4-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,192.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#grad0-40"
- id="linearGradient5917"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(8.8786147,-0.0235964,-0.00460261,1.50035,-400.25558,-2006.3745)"
- x1="-0.12893644"
- y1="1717.1688"
- x2="28.140806"
- y2="1717.1688" />
- <linearGradient
- id="grad0-40"
- x1="0"
- y1="0"
- x2="1"
- y2="0"
- gradientTransform="rotate(60,0.5,0.5)">
- <stop
- offset="0"
- stop-color="#f3f6fa"
- stop-opacity="1"
- id="stop3419" />
- <stop
- offset="0.24"
- stop-color="#f9fafc"
- stop-opacity="1"
- id="stop3421" />
- <stop
- offset="0.54"
- stop-color="#feffff"
- stop-opacity="1"
- id="stop3423" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30"
- style="overflow:visible">
- <path
- id="path4530-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6"
- style="overflow:visible">
- <path
- id="path4533-19"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0"
- style="overflow:visible">
- <path
- id="path4530-0-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8"
- style="overflow:visible">
- <path
- id="path4533-19-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9"
- style="overflow:visible">
- <path
- id="path4530-0-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3"
- style="overflow:visible">
- <path
- id="path4533-19-6-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-7"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,321.82147,-1.8659026)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,376.02779,12.240541)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-81"
- style="overflow:visible">
- <path
- id="path4530-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-5"
- style="overflow:visible">
- <path
- id="path4533-72"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-1"
- style="overflow:visible">
- <path
- id="path4530-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker9714"
- style="overflow:visible">
- <path
- id="path9712"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48"
- style="overflow:visible">
- <path
- id="path4530-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker10117"
- style="overflow:visible">
- <path
- id="path10115"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48-6"
- style="overflow:visible">
- <path
- id="path4530-4-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker11186"
- style="overflow:visible">
- <path
- id="path11184"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8-0"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,497.77779,12.751681)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9-0"
- style="overflow:visible">
- <path
- id="path4530-0-6-4-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3-7"
- style="overflow:visible">
- <path
- id="path4533-19-6-1-5"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- </defs>
- <sodipodi:namedview
- id="base"
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1.0"
- inkscape:pageopacity="0.0"
- inkscape:pageshadow="2"
- inkscape:zoom="1.4142136"
- inkscape:cx="371.09569"
- inkscape:cy="130.22425"
- inkscape:document-units="px"
- inkscape:current-layer="layer1"
- showgrid="false"
- inkscape:window-width="1920"
- inkscape:window-height="1057"
- inkscape:window-x="-8"
- inkscape:window-y="-8"
- inkscape:window-maximized="1"
- fit-margin-top="0.1"
- fit-margin-left="0.1"
- fit-margin-right="0.1"
- fit-margin-bottom="0.1"
- inkscape:measure-start="-29.078,219.858"
- inkscape:measure-end="346.809,219.858"
- showguides="true"
- inkscape:snap-page="true"
- inkscape:snap-others="false"
- inkscape:snap-nodes="false"
- inkscape:snap-bbox="true"
- inkscape:lockguides="false"
- inkscape:guide-bbox="true">
- <sodipodi:guide
- position="-120.20815,574.17069"
- orientation="0,1"
- id="guide7077"
- inkscape:locked="false" />
- </sodipodi:namedview>
- <metadata
- id="metadata3873">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <g
- inkscape:label="Layer 1"
- inkscape:groupmode="layer"
- id="layer1"
- transform="translate(-46.542857,-100.33361)">
- <flowRoot
- xml:space="preserve"
- id="flowRoot5313"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;letter-spacing:0px;word-spacing:0px"><flowRegion
- id="flowRegion5315"><rect
- id="rect5317"
- width="120.91525"
- height="96.873627"
- x="-192.33304"
- y="-87.130829" /></flowRegion><flowPara
- id="flowPara5319" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="90.320152"
- y="299.67871"
- id="text2978"
- inkscape:export-filename="/home/matz/barracuda/rapports/mbuf-api-v2-images/octeon_multi.png"
- inkscape:export-xdpi="112"
- inkscape:export-ydpi="112"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="90.320152"
- y="299.67871"
- id="tspan3006"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15.74255753px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025"> </tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.82973665;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066"
- width="127.44949"
- height="225.03024"
- x="47.185646"
- y="111.20448" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="154.93478" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55900002;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6"
- width="117.1069"
- height="20.907221"
- x="51.955002"
- y="181.51834" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b7dfd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6-2"
- width="117.1069"
- height="20.907221"
- x="51.691605"
- y="205.82234" />
- <rect
- y="154.93478"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5160"
- style="fill:url(#linearGradient5466);fill-opacity:1;stroke:#6b8afd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5162"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="231.92767" />
- <rect
- y="255.45328"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5164"
- style="fill:#daeef5;fill-opacity:1;stroke:#6b6ffd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="281.11758" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.59729731;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-6"
- width="117.0697"
- height="23.892008"
- x="52.659744"
- y="306.01089" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:'Bitstream Vera Sans';-inkscape-font-specification:'Bitstream Vera Sans';fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.955597"
- y="163.55217"
- id="text5219-26-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.955597"
- y="163.55217"
- id="tspan5223-10-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.098343"
- y="187.18845"
- id="text5219-26-1-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.098343"
- y="187.18845"
- id="tspan5223-10-9-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.829468"
- y="211.79611"
- id="text5219-26-1-5"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.829468"
- y="211.79611"
- id="tspan5223-10-9-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.770523"
- y="235.66898"
- id="text5219-26-1-5-7-6"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.770523"
- y="235.66898"
- id="tspan5223-10-9-1-6-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPC AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.895973"
- y="259.25156"
- id="text5219-26-1-5-7-6-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.895973"
- y="259.25156"
- id="tspan5223-10-9-1-6-8-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">CPT AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.645073"
- y="282.35391"
- id="text5219-26-1-5-7-6-3-0"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.645073"
- y="282.35391"
- id="tspan5223-10-9-1-6-8-3-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">RVU AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.93084431px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.07757032"
- x="110.2803"
- y="126.02858"
- id="text5219-26"
- transform="scale(1.0076913,0.9923674)"><tspan
- sodipodi:role="line"
- x="110.2803"
- y="126.02858"
- id="tspan5223-10"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032">Linux AF driver</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="139.49821"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5325">(octeontx2_af)</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="152.96783"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#ff0000;stroke-width:1.07757032"
- id="tspan5327">PF0</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="160.38988"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5329" /></text>
- <rect
- style="fill:url(#linearGradient5476);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468"
- width="36.554455"
- height="18.169683"
- x="49.603416"
- y="357.7995" />
- <g
- id="g5594"
- transform="translate(-18,-40)">
- <text
- id="text5480"
- y="409.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#6a5400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#6a5400;fill-opacity:1"
- y="409.46326"
- x="73.41291"
- id="tspan5478"
- sodipodi:role="line">CGX-0</tspan></text>
- </g>
- <rect
- style="fill:url(#linearGradient5658);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468-2"
- width="36.554455"
- height="18.169683"
- x="92.712852"
- y="358.37842" />
- <g
- id="g5594-7"
- transform="translate(25.109434,2.578931)">
- <text
- id="text5480-9"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#695400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0"
- sodipodi:role="line">CGX-1</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="104.15788"
- y="355.79947"
- id="text5711"><tspan
- sodipodi:role="line"
- id="tspan5709"
- x="104.15788"
- y="392.29269" /></text>
- </g>
- <rect
- style="opacity:1;fill:url(#linearGradient6997);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1"
- width="36.554455"
- height="18.169683"
- x="136.71284"
- y="358.37842" />
- <g
- id="g5594-7-0"
- transform="translate(69.109434,2.578931)">
- <text
- id="text5480-9-7"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0-4"
- sodipodi:role="line">CGX-2</tspan></text>
- </g>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="116.4436"
- y="309.90784"
- id="text5219-26-1-5-7-6-3-0-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="116.4436"
- y="309.90784"
- id="tspan5223-10-9-1-6-8-3-1-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.03398025">CGX-FW Interface</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="m 65.54286,336.17648 v 23"
- id="path7614"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30);marker-end:url(#Arrow1Mend-6)"
- d="m 108.54285,336.67647 v 23"
- id="path7614-2"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0);marker-end:url(#Arrow1Mend-6-8)"
- d="m 152.54285,336.67647 v 23"
- id="path7614-2-2"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50469553;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1"
- width="100.27454"
- height="105.81976"
- x="242.65558"
- y="233.7666" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6"
- width="100.27335"
- height="106.31857"
- x="361.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7"
- width="100.27335"
- height="106.31857"
- x="467.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.49445513;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7-0"
- width="95.784782"
- height="106.33"
- x="573.40039"
- y="233.76149" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:0.984;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.984, 0.98400000000000021;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="M 176.02438,304.15296 C 237.06133,305.2 237.06133,305.2 237.06133,305.2"
- id="path8315"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="177.04286"
- y="299.17648"
- id="text8319"><tspan
- sodipodi:role="line"
- id="tspan8317"
- x="177.04286"
- y="299.17648"
- style="font-size:10.66666698px;line-height:1">AF-PF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="291.53308"
- y="264.67648"
- id="text8323"><tspan
- sodipodi:role="line"
- id="tspan8321"
- x="291.53308"
- y="264.67648"
- style="font-size:10px;text-align:center;text-anchor:middle"><tspan
- style="font-size:10px;fill:#0000ff"
- id="tspan8339"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11972">Linux</tspan></tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11970"> Netdev </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="281.34314"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345">driver</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="298.00983"
- id="tspan8325"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">(octeontx2_pf)</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="314.67648"
- id="tspan8327"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10511">x</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="331.34314"
- id="tspan8329" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot8331"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion8333"><rect
- id="rect8335"
- width="48.5"
- height="28"
- x="252.5"
- y="208.34286" /></flowRegion><flowPara
- id="flowPara8337" /></flowRoot> <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9"
- width="71.28923"
- height="15.589548"
- x="253.89825"
- y="320.63168" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="283.97266"
- y="319.09348"
- id="text5219-26-1-5-7-6-3-0-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="283.97266"
- y="319.09348"
- id="tspan5223-10-9-1-6-8-3-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7"
- width="71.28923"
- height="15.589548"
- x="255.89822"
- y="237.88171" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="285.03787"
- y="239.81017"
- id="text5219-26-1-5-7-6-3-0-1-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="285.03787"
- y="239.81017"
- id="tspan5223-10-9-1-6-8-3-1-0-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9);marker-end:url(#Arrow1Mend-6-8-3)"
- d="m 287.54285,340.99417 v 18.3646"
- id="path7614-2-2-8"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8);fill-opacity:1;stroke:#695400;stroke-width:1.316;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4"
- width="81.505402"
- height="17.62063"
- x="251.04015"
- y="359.86615" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="263.46152"
- y="224.99915"
- id="text8319-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7"
- x="263.46152"
- y="224.99915"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="259.23218"
- y="371.46179"
- id="text8319-7-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3"
- x="259.23218"
- y="371.46179"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3"
- width="80.855743"
- height="92.400963"
- x="197.86496"
- y="112.97599" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4"
- width="80.855743"
- height="92.400963"
- x="286.61499"
- y="112.476" />
- <path
- style="fill:none;stroke:#580000;stroke-width:0.60000002;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.3, 0.3;stroke-dashoffset:0;stroke-opacity:1"
- d="m 188.04286,109.67648 c 2.5,238.5 2,238 2,238 163.49999,0.5 163.49999,0.5 163.49999,0.5 v -124 l -70,0.5 -1.5,-116 v 1.5 z"
- id="path9240"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0"
- width="80.855743"
- height="92.400963"
- x="375.11499"
- y="111.976" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0-0"
- width="80.855743"
- height="92.400963"
- x="586.61499"
- y="111.476" />
- <path
- style="fill:none;stroke:#ff00cc;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2, 0.29999999999999999;stroke-dashoffset:0"
- d="m 675.54284,107.17648 1,239.5 -317.99999,0.5 -1,-125 14.5,0.5 -0.5,-113.5 z"
- id="path9272"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2,0.3;stroke-dashoffset:0"
- d="m 284.54285,109.17648 0.5,100 84,-0.5 v -99.5 z"
- id="path9274"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="231.87221"
- y="146.02637"
- id="text8323-1"
- transform="scale(1.0315378,0.96942639)"><tspan
- sodipodi:role="line"
- id="tspan8321-2"
- x="231.87221"
- y="146.02637"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="font-size:8.12077141px;fill:#0000ff;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8339-6">Linux</tspan> Netdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="159.56099"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6">driver</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="173.09561"
- id="tspan8325-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">(octeontx2_vf)</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="186.63022"
- id="tspan8327-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10513">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="200.16484"
- id="tspan8329-3"
- style="stroke-width:0.81207716;fill:#782121" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9"
- width="59.718147"
- height="12.272857"
- x="207.65872"
- y="185.61246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="225.56583"
- y="192.49615"
- id="text5219-26-1-5-7-6-3-0-1-6"
- transform="scale(0.99742277,1.0025839)"><tspan
- sodipodi:role="line"
- x="225.56583"
- y="192.49615"
- id="tspan5223-10-9-1-6-8-3-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5"
- width="59.718147"
- height="12.272857"
- x="209.33406"
- y="116.46765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="226.43088"
- y="124.1223"
- id="text5219-26-1-5-7-6-3-0-1-4-7"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="226.43088"
- y="124.1223"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="317.66635"
- y="121.26925"
- id="text8323-1-9"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-3"
- x="317.66635"
- y="131.14769"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="144.6823"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9400"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9402">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9398">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="158.21692"
- id="tspan8325-2-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">driver</tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="171.75154"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9392" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="185.28616"
- id="tspan8327-7-8"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10515">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-0">-VF1</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="198.82077"
- id="tspan8329-3-3"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3"
- width="59.718147"
- height="12.272857"
- x="295.65872"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="313.79312"
- y="191.99756"
- id="text5219-26-1-5-7-6-3-0-1-6-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="313.79312"
- y="191.99756"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8"
- width="59.718147"
- height="12.272857"
- x="297.33408"
- y="115.96765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="314.65817"
- y="123.62372"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="314.65817"
- y="123.62372"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mstart);marker-start:url(#Arrow1Mstart)"
- d="m 254.54285,205.17648 c 1,29 1,28.5 1,28.5"
- id="path9405"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-1);marker-end:url(#Arrow1Mstart-1)"
- d="m 324.42292,203.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-3"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="408.28308"
- y="265.83011"
- id="text8323-7"><tspan
- sodipodi:role="line"
- id="tspan8321-3"
- x="408.28308"
- y="265.83011"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10440">DPDK</tspan> Ethdev <tspan
- style="font-size:10px;fill:#00d4aa;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8343-5">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="282.49677"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-8">driver</tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="299.16345"
- id="tspan8325-5"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="408.28308"
- y="315.83011"
- id="tspan8327-1"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#ff0000;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10517">y</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="332.49677"
- id="tspan8329-2" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3"
- width="71.28923"
- height="15.589548"
- x="376.64825"
- y="319.78531" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="410.92075"
- y="318.27411"
- id="text5219-26-1-5-7-6-3-0-1-62"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="410.92075"
- y="318.27411"
- id="tspan5223-10-9-1-6-8-3-1-0-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2"
- width="71.28923"
- height="15.589548"
- x="378.64822"
- y="237.03534" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="411.98596"
- y="238.99095"
- id="text5219-26-1-5-7-6-3-0-1-4-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="411.98596"
- y="238.99095"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="386.21152"
- y="224.15277"
- id="text8319-7-5"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8"
- x="386.21152"
- y="224.15277"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48);marker-end:url(#Arrow1Mstart-48)"
- d="m 411.29285,204.33011 c 1,29 1,28.5 1,28.5"
- id="path9405-0"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="520.61176"
- y="265.49265"
- id="text8323-7-8"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3"
- x="520.61176"
- y="265.49265"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a"
- id="tspan10440-2">DPDK</tspan> Eventdev <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="282.1593"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6">driver</tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="298.82599"
- id="tspan8325-5-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="520.61176"
- y="315.49265"
- id="tspan8327-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10519">z</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="332.1593"
- id="tspan8329-2-1" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3-6"
- width="71.28923"
- height="15.589548"
- x="484.97693"
- y="319.44785" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="522.95496"
- y="317.94733"
- id="text5219-26-1-5-7-6-3-0-1-62-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="522.95496"
- y="317.94733"
- id="tspan5223-10-9-1-6-8-3-1-0-4-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">TIM LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2-8"
- width="71.28923"
- height="15.589548"
- x="486.9769"
- y="236.69788" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="524.0202"
- y="238.66432"
- id="text5219-26-1-5-7-6-3-0-1-4-4-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="524.0202"
- y="238.66432"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7-6"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="619.6156"
- y="265.47531"
- id="text8323-7-8-3"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3-1"
- x="619.6156"
- y="265.47531"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"> <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff"
- id="tspan10562">Linux </tspan>Crypto <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3-7">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="282.14197"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6-8">driver</tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="298.80865"
- id="tspan8325-5-4-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="619.6156"
- y="315.47531"
- id="tspan8327-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10560">m</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="332.14197"
- id="tspan8329-2-1-9" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0"
- width="59.718147"
- height="12.272857"
- x="385.10458"
- y="183.92126" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="403.46997"
- y="190.80957"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="403.46997"
- y="190.80957"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8-5"
- width="59.718147"
- height="12.272857"
- x="386.77994"
- y="116.77647" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="404.33502"
- y="124.43062"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9-8"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="404.33502"
- y="124.43062"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="402.97598"
- y="143.8235"
- id="text8323-1-7"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1"
- x="402.97598"
- y="143.8235"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11102">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="157.35812"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6-5">driver</tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="170.89275"
- id="tspan8327-7-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="402.97598"
- y="184.42735"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11106">PF<tspan
- style="fill:#a02c2c;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11110">y</tspan><tspan
- style="font-size:8.12077141px;fill:#a02c2c;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-2">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="197.96198"
- id="tspan8329-3-4"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0-0"
- width="59.718147"
- height="12.272857"
- x="596.60461"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="615.51703"
- y="191.99774"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="615.51703"
- y="191.99774"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">CPT LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="608.00879"
- y="145.05219"
- id="text8323-1-7-3"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1-5"
- x="608.00879"
- y="145.05219"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716"><tspan
- id="tspan1793"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a">DPDK</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11966"> Crypto </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0066ff"
- id="tspan9396-1-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="158.58681"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan8345-6-5-4">driver</tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="172.12143"
- id="tspan8327-7-2-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="608.00879"
- y="185.65604"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan11106-8">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737"
- id="tspan11172">m</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737;stroke-width:0.81207716"
- id="tspan8347-1-2-0">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="199.19066"
- id="tspan8329-3-4-0"
- style="stroke-width:0.81207716" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="603.23218"
- y="224.74855"
- id="text8319-7-5-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8-4"
- x="603.23218"
- y="224.74855"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48-6);marker-end:url(#Arrow1Mstart-48-6)"
- d="m 628.31351,204.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-0-2"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="509.60013"
- y="128.17648"
- id="text11483"><tspan
- sodipodi:role="line"
- id="tspan11481"
- x="511.47513"
- y="128.17648"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544">D<tspan
- style="-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal;fill:#005544"
- id="tspan11962">PDK-APP1 with </tspan></tspan><tspan
- sodipodi:role="line"
- x="511.47513"
- y="144.84315"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485">one ethdev </tspan><tspan
- sodipodi:role="line"
- x="509.60013"
- y="161.50981"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11491">over Linux PF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="518.02197"
- y="179.98117"
- id="text11483-6"><tspan
- sodipodi:role="line"
- id="tspan11481-4"
- x="519.42822"
- y="179.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">DPDK-APP2 with </tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="196.64784"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485-5">Two ethdevs(PF,VF) ,</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="213.3145"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11517">eventdev, timer adapter and</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="229.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11519"> cryptodev</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="246.64784"
- style="font-size:10.66666698px;text-align:center;text-anchor:middle;fill:#00ffff"
- id="tspan11491-6" /></text>
- <path
- style="fill:#005544;stroke:#00ffff;stroke-width:1.02430511;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.02430516, 4.09722065999999963;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mstart-8)"
- d="m 483.99846,150.16496 -112.95349,13.41069 v 0 l -0.48897,-0.53643 h 0.48897"
- id="path11521"
- inkscape:connector-curvature="0" />
- <path
- style="fill:#ff0000;stroke:#ff5555;stroke-width:1.16440296;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.16440301, 2.32880602999999997;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-0)"
- d="m 545.54814,186.52569 c 26.3521,-76.73875 26.3521,-76.73875 26.3521,-76.73875"
- id="path11523"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9-0);marker-end:url(#Arrow1Mend-6-8-3-7)"
- d="m 409.29286,341.50531 v 18.3646"
- id="path7614-2-2-8-2"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8-0);fill-opacity:1;stroke:#695400;stroke-width:1.31599998;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4-9"
- width="81.505402"
- height="17.62063"
- x="372.79016"
- y="360.37729" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="380.98218"
- y="371.97293"
- id="text8319-7-7-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3-1"
- x="380.98218"
- y="371.97293"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index 7614e1a368..2ff91a6018 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -15,4 +15,3 @@ The following are platform specific guides and setup information.
dpaa
dpaa2
octeontx
- octeontx2
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
deleted file mode 100644
index 5ab43abbdd..0000000000
--- a/doc/guides/platform/octeontx2.rst
+++ /dev/null
@@ -1,520 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-Marvell OCTEON TX2 Platform Guide
-=================================
-
-This document gives an overview of **Marvell OCTEON TX2** RVU H/W block,
-packet flow and procedure to build DPDK on OCTEON TX2 platform.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Supported OCTEON TX2 SoCs
--------------------------
-
-- CN98xx
-- CN96xx
-- CN93xx
-
-OCTEON TX2 Resource Virtualization Unit architecture
-----------------------------------------------------
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the
-RVU architecture and a resource provisioning example.
-
-.. _figure_octeontx2_resource_virtualization:
-
-.. figure:: img/octeontx2_resource_virtualization.*
-
- OCTEON TX2 Resource virtualization architecture and provisioning example
-
-
-Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW
-resources belonging to the network, crypto and other functional blocks onto
-PCI-compatible physical and virtual functions.
-
-Each functional block has multiple local functions (LFs) for
-provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
-physical functions (PFs) and virtual functions (VFs).
-
-The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local
-functions (LFs) provided by the RVU and its functional mapping to
-DPDK subsystem.
-
-.. _table_octeontx2_rvu_dpdk_mapping:
-
-.. table:: RVU managed functional blocks and its mapping to DPDK subsystem
-
- +---+-----+--------------------------------------------------------------+
- | # | LF | DPDK subsystem mapping |
- +===+=====+==============================================================+
- | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
- +---+-----+--------------------------------------------------------------+
- | 2 | NPA | rte_mempool |
- +---+-----+--------------------------------------------------------------+
- | 3 | NPC | rte_flow |
- +---+-----+--------------------------------------------------------------+
- | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter |
- +---+-----+--------------------------------------------------------------+
- | 5 | SSO | rte_eventdev |
- +---+-----+--------------------------------------------------------------+
- | 6 | TIM | rte_event_timer_adapter |
- +---+-----+--------------------------------------------------------------+
- | 7 | LBK | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 8 | DPI | rte_rawdev |
- +---+-----+--------------------------------------------------------------+
- | 9 | SDP | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 10| REE | rte_regexdev |
- +---+-----+--------------------------------------------------------------+
-
-PF0 is called the administrative / admin function (AF) and has exclusive
-privileges to provision RVU functional block's LFs to each of the PF/VF.
-
-PF/VFs communicates with AF via a shared memory region (mailbox).Upon receiving
-requests from PF/VF, AF does resource provisioning and other HW configuration.
-
-AF is always attached to host, but PF/VFs may be used by host kernel itself,
-or attached to VMs or to userspace applications like DPDK, etc. So, AF has to
-handle provisioning/configuration requests sent by any device from any domain.
-
-The AF driver does not receive or process any data.
-It is only a configuration driver used in control path.
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a
-resource provisioning example where,
-
-1. PFx and PFx-VF0 bound to Linux netdev driver.
-2. PFx-VF1 ethdev driver bound to the first DPDK application.
-3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
-
-LBK HW Access
--------------
-
-Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back to NIX-TX.
-The loopback block has N channels and contains data buffering that is shared across
-all channels. The LBK HW Unit is abstracted using ethdev subsystem, Where PF0's
-VFs are exposed as ethdev device and odd-even pairs of VFs are tied together,
-that is, packets sent on odd VF end up received on even VF and vice versa.
-This would enable HW accelerated means of communication between two domains
-where even VF bound to the first domain and odd VF bound to the second domain.
-
-Typical application usage models are,
-
-#. Communication between the Linux kernel and DPDK application.
-#. Exception path to Linux kernel from DPDK application as SW ``KNI`` replacement.
-#. Communication between two different DPDK applications.
-
-SDP interface
--------------
-
-System DPI Packet Interface unit(SDP) provides PCIe endpoint support for remote host
-to DMA packets into and out of OCTEON TX2 SoC. SDP interface comes in to live only when
-OCTEON TX2 SoC is connected in PCIe endpoint mode. It can be used to send/receive
-packets to/from remote host machine using input/output queue pairs exposed to it.
-SDP interface receives input packets from remote host from NIX-RX and sends packets
-to remote host using NIX-TX. Remote host machine need to use corresponding driver
-(kernel/user mode) to communicate with SDP interface on OCTEON TX2 SoC. SDP supports
-single PCIe SRIOV physical function(PF) and multiple virtual functions(VF's). Users
-can bind PF or VF to use SDP interface and it will be enumerated as ethdev ports.
-
-The primary use case for SDP is to enable the smart NIC use case. Typical usage models are,
-
-#. Communication channel between remote host and OCTEON TX2 SoC over PCIe.
-#. Transfer packets received from network interface to remote host over PCIe and
- vice-versa.
-
-OCTEON TX2 packet flow
-----------------------
-
-The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts
-the packet flow on OCTEON TX2 SoC in conjunction with use of various HW accelerators.
-
-.. _figure_octeontx2_packet_flow_hw_accelerators:
-
-.. figure:: img/octeontx2_packet_flow_hw_accelerators.*
-
- OCTEON TX2 packet flow in conjunction with use of HW accelerators
-
-HW Offload Drivers
-------------------
-
-This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
-
-#. **Ethdev Driver**
- See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
-
-#. **Mempool Driver**
- See :doc:`../mempool/octeontx2` for NPA mempool driver information.
-
-#. **Event Device Driver**
- See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
-
-#. **Crypto Device Driver**
- See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-
-Procedure to Setup Platform
----------------------------
-
-There are three main prerequisites for setting up DPDK on OCTEON TX2
-compatible board:
-
-1. **OCTEON TX2 Linux kernel driver**
-
- The dependent kernel drivers can be obtained from the
- `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
-
- Alternatively, the Marvell SDK also provides the required kernel drivers.
-
- Linux kernel should be configured with the following features enabled:
-
-.. code-block:: console
-
- # 64K pages enabled for better performance
- CONFIG_ARM64_64K_PAGES=y
- CONFIG_ARM64_VA_BITS_48=y
- # huge pages support enabled
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- # VFIO enabled with TYPE1 IOMMU at minimum
- CONFIG_VFIO_IOMMU_TYPE1=y
- CONFIG_VFIO_VIRQFD=y
- CONFIG_VFIO=y
- CONFIG_VFIO_NOIOMMU=y
- CONFIG_VFIO_PCI=y
- CONFIG_VFIO_PCI_MMAP=y
- # SMMUv3 driver
- CONFIG_ARM_SMMU_V3=y
- # ARMv8.1 LSE atomics
- CONFIG_ARM64_LSE_ATOMICS=y
- # OCTEONTX2 drivers
- CONFIG_OCTEONTX2_MBOX=y
- CONFIG_OCTEONTX2_AF=y
- # Enable if netdev PF driver required
- CONFIG_OCTEONTX2_PF=y
- # Enable if netdev VF driver required
- CONFIG_OCTEONTX2_VF=y
- CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
- # Enable if OCTEONTX2 DMA PF driver required
- CONFIG_OCTEONTX2_DPI_PF=n
-
-2. **ARM64 Linux Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
-
- Alternatively, the Marvell SDK also provides GNU GCC toolchain, which is
- optimized for OCTEON TX2 CPU.
-
-3. **Rootfile system**
-
- Any *aarch64* supporting filesystem may be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- Alternatively, the Marvell SDK provides the buildroot based root filesystem.
- The SDK includes all the above prerequisites necessary to bring up the OCTEON TX2 board.
-
-- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
-
-
-Debugging Options
------------------
-
-.. _table_octeontx2_common_debug_options:
-
-.. table:: OCTEON TX2 common debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | Common | --log-level='pmd\.octeontx2\.base,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | Mailbox | --log-level='pmd\.octeontx2\.mbox,8' |
- +---+------------+-------------------------------------------------------+
-
-Debugfs support
-~~~~~~~~~~~~~~~
-
-The **OCTEON TX2 Linux kernel driver** provides support to dump RVU blocks
-context or stats using debugfs.
-
-Enable ``debugfs`` by:
-
-1. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUGFS=y``.
-2. Boot OCTEON TX2 with debugfs supported kernel.
-3. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using.
-
-.. code-block:: console
-
- # mount -t debugfs none /sys/kernel/debug
-
-Currently ``debugfs`` supports the following RVU blocks NIX, NPA, NPC, NDC,
-SSO & CGX.
-
-The file structure under ``/sys/kernel/debug`` is as follows
-
-.. code-block:: console
-
- octeontx2/
- |-- cgx
- | |-- cgx0
- | | '-- lmac0
- | | '-- stats
- | |-- cgx1
- | | |-- lmac0
- | | | '-- stats
- | | '-- lmac1
- | | '-- stats
- | '-- cgx2
- | '-- lmac0
- | '-- stats
- |-- cpt
- | |-- cpt_engines_info
- | |-- cpt_engines_sts
- | |-- cpt_err_info
- | |-- cpt_lfs_info
- | '-- cpt_pc
- |---- nix
- | |-- cq_ctx
- | |-- ndc_rx_cache
- | |-- ndc_rx_hits_miss
- | |-- ndc_tx_cache
- | |-- ndc_tx_hits_miss
- | |-- qsize
- | |-- rq_ctx
- | |-- sq_ctx
- | '-- tx_stall_hwissue
- |-- npa
- | |-- aura_ctx
- | |-- ndc_cache
- | |-- ndc_hits_miss
- | |-- pool_ctx
- | '-- qsize
- |-- npc
- | |-- mcam_info
- | '-- rx_miss_act_stats
- |-- rsrc_alloc
- '-- sso
- |-- hws
- | '-- sso_hws_info
- '-- hwgrp
- |-- sso_hwgrp_aq_thresh
- |-- sso_hwgrp_iaq_walk
- |-- sso_hwgrp_pc
- |-- sso_hwgrp_free_list_walk
- |-- sso_hwgrp_ient_walk
- '-- sso_hwgrp_taq_walk
-
-RVU block LF allocation:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/rsrc_alloc
-
- pcifunc NPA NIX SSO GROUP SSOWS TIM CPT
- PF1 0 0
- PF4 1
- PF13 0, 1 0, 1 0
-
-CGX example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats
-
- =======Link Status======
- Link is UP 40000 Mbps
- =======RX_STATS======
- Received packets: 0
- Octets of received packets: 0
- Received PAUSE packets: 0
- Received PAUSE and control packets: 0
- Filtered DMAC0 (NIX-bound) packets: 0
- Filtered DMAC0 (NIX-bound) octets: 0
- Packets dropped due to RX FIFO full: 0
- Octets dropped due to RX FIFO full: 0
- Error packets: 0
- Filtered DMAC1 (NCSI-bound) packets: 0
- Filtered DMAC1 (NCSI-bound) octets: 0
- NCSI-bound packets dropped: 0
- NCSI-bound octets dropped: 0
- =======TX_STATS======
- Packets dropped due to excessive collisions: 0
- Packets dropped due to excessive deferral: 0
- Multiple collisions before successful transmission: 0
- Single collisions before successful transmission: 0
- Total octets sent on the interface: 0
- Total frames sent on the interface: 0
- Packets sent with an octet count < 64: 0
- Packets sent with an octet count == 64: 0
- Packets sent with an octet count of 65127: 0
- Packets sent with an octet count of 128-255: 0
- Packets sent with an octet count of 256-511: 0
- Packets sent with an octet count of 512-1023: 0
- Packets sent with an octet count of 1024-1518: 0
- Packets sent with an octet count of > 1518: 0
- Packets sent to a broadcast DMAC: 0
- Packets sent to the multicast DMAC: 0
- Transmit underflow and were truncated: 0
- Control/PAUSE packets sent: 0
-
-CPT example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cpt/cpt_pc
-
- CPT instruction requests 0
- CPT instruction latency 0
- CPT NCB read requests 0
- CPT NCB read latency 0
- CPT read requests caused by UC fills 0
- CPT active cycles pc 1395642
- CPT clock count pc 5579867595493
-
-NIX example usage:
-
-.. code-block:: console
-
- Usage: echo <nixlf> [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
-
- =====cq_ctx for nixlf:0 and qidx:0 is=====
- W0: base 158ef1a00
-
- W1: wrptr 0
- W1: avg_con 0
- W1: cint_idx 0
- W1: cq_err 0
- W1: qint_idx 0
- W1: bpid 0
- W1: bp_ena 0
-
- W2: update_time 31043
- W2:avg_level 255
- W2: head 0
- W2:tail 0
-
- W3: cq_err_int_ena 5
- W3:cq_err_int 0
- W3: qsize 4
- W3:caching 1
- W3: substream 0x000
- W3: ena 1
- W3: drop_ena 1
- W3: drop 64
- W3: bp 0
-
-NPA example usage:
-
-.. code-block:: console
-
- Usage: echo <npalf> [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
-
- ======POOL : 0=======
- W0: Stack base 1375bff00
- W1: ena 1
- W1: nat_align 1
- W1: stack_caching 1
- W1: stack_way_mask 0
- W1: buf_offset 1
- W1: buf_size 19
- W2: stack_max_pages 24315
- W2: stack_pages 24314
- W3: op_pc 267456
- W4: stack_offset 2
- W4: shift 5
- W4: avg_level 255
- W4: avg_con 0
- W4: fc_ena 0
- W4: fc_stype 0
- W4: fc_hyst_bits 0
- W4: fc_up_crossing 0
- W4: update_time 62993
- W5: fc_addr 0
- W6: ptr_start 1593adf00
- W7: ptr_end 180000000
- W8: err_int 0
- W8: err_int_ena 7
- W8: thresh_int 0
- W8: thresh_int_ena 0
- W8: thresh_up 0
- W8: thresh_qint_idx 0
- W8: err_qint_idx 0
-
-NPC example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/npc/mcam_info
-
- NPC MCAM info:
- RX keywidth : 224bits
- TX keywidth : 224bits
-
- MCAM entries : 2048
- Reserved : 158
- Available : 1890
-
- MCAM counters : 512
- Reserved : 1
- Available : 511
-
-SSO example usage:
-
-.. code-block:: console
-
- Usage: echo [<hws>/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
- echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
-
- ==================================================
- SSOW HWS[0] Arbitration State 0x0
- SSOW HWS[0] Guest Machine Control 0x0
- SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff
- ==================================================
-
-Compile DPDK
-------------
-
-DPDK may be compiled either natively on OCTEON TX2 platform or cross-compiled on
-an x86 based platform.
-
-Native Compilation
-~~~~~~~~~~~~~~~~~~
-
-.. code-block:: console
-
- meson build
- ninja -C build
-
-Cross Compilation
-~~~~~~~~~~~~~~~~~
-
-Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
-
-.. code-block:: console
-
- meson build --cross-file config/arm/arm64_octeontx2_linux_gcc
- ninja -C build
-
-.. note::
-
- By default, meson cross compilation uses ``aarch64-linux-gnu-gcc`` toolchain,
- if Marvell toolchain is available then it can be used by overriding the
- c, cpp, ar, strip ``binaries`` attributes to respective Marvell
- toolchain binaries in ``config/arm/arm64_octeontx2_linux_gcc`` file.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5581822d10..4e5b23c53d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,20 +125,3 @@ Deprecation Notices
applications should be updated to use the ``dmadev`` library instead,
with the underlying HW-functionality being provided by the ``ioat`` or
``idxd`` dma drivers
-
-* drivers/octeontx2: remove octeontx2 drivers
-
- In the view of enabling unified driver for ``octeontx2(cn9k)``/``octeontx3(cn10k)``,
- removing ``drivers/octeontx2`` drivers and replace with ``drivers/cnxk/`` which
- supports both ``octeontx2(cn9k)`` and ``octeontx3(cn10k)`` SoCs.
- This deprecation notice is to do following actions in DPDK v22.02 version.
-
- #. Replace ``drivers/common/octeontx2/`` with ``drivers/common/cnxk/``
- #. Replace ``drivers/mempool/octeontx2/`` with ``drivers/mempool/cnxk/``
- #. Replace ``drivers/net/octeontx2/`` with ``drivers/net/cnxk/``
- #. Replace ``drivers/event/octeontx2/`` with ``drivers/event/cnxk/``
- #. Replace ``drivers/crypto/octeontx2/`` with ``drivers/crypto/cnxk/``
- #. Rename ``drivers/regex/octeontx2/`` as ``drivers/regex/cn9k/``
- #. Rename ``config/arm/arm64_octeontx2_linux_gcc`` as ``config/arm/arm64_cn9k_linux_gcc``
-
- Last two actions are to align naming convention as cnxk scheme.
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 1a0e6111d7..31fcebdf95 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -152,11 +152,11 @@ New Features
``eventdev Tx adapter``, ``eventdev Timer adapter`` and ``rawdev DMA``
drivers for various HW co-processors available in ``OCTEON TX2`` SoC.
- See :doc:`../platform/octeontx2` and driver information:
+ See ``platform/octeontx2`` and driver information:
- * :doc:`../nics/octeontx2`
- * :doc:`../mempool/octeontx2`
- * :doc:`../eventdevs/octeontx2`
+ * ``nics/octeontx2``
+ * ``mempool/octeontx2``
+ * ``eventdevs/octeontx2``
* ``rawdevs/octeontx2_dma``
* **Introduced the Intel NTB PMD.**
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 302b3e5f37..79f3475ae6 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -192,7 +192,7 @@ New Features
Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
SoC.
- See :doc:`../cryptodevs/octeontx2` for more details
+ See ``cryptodevs/octeontx2`` for more details
* **Updated NXP crypto PMDs for PDCP support.**
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index ce93483291..d3d5ebe4dc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -157,7 +157,6 @@ The following are the application command-line options:
crypto_mvsam
crypto_null
crypto_octeontx
- crypto_octeontx2
crypto_openssl
crypto_qat
crypto_scheduler
diff --git a/drivers/common/meson.build b/drivers/common/meson.build
index 4acbad60b1..ea261dd70a 100644
--- a/drivers/common/meson.build
+++ b/drivers/common/meson.build
@@ -8,5 +8,4 @@ drivers = [
'iavf',
'mvep',
'octeontx',
- 'octeontx2',
]
diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h
deleted file mode 100644
index e3b68505b7..0000000000
--- a/drivers/common/octeontx2/hw/otx2_nix.h
+++ /dev/null
@@ -1,1391 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NIX_HW_H__
-#define __OTX2_NIX_HW_H__
-
-/* Register offsets */
-
-#define NIX_AF_CFG (0x0ull)
-#define NIX_AF_STATUS (0x10ull)
-#define NIX_AF_NDC_CFG (0x18ull)
-#define NIX_AF_CONST (0x20ull)
-#define NIX_AF_CONST1 (0x28ull)
-#define NIX_AF_CONST2 (0x30ull)
-#define NIX_AF_CONST3 (0x38ull)
-#define NIX_AF_SQ_CONST (0x40ull)
-#define NIX_AF_CQ_CONST (0x48ull)
-#define NIX_AF_RQ_CONST (0x50ull)
-#define NIX_AF_PSE_CONST (0x60ull)
-#define NIX_AF_TL1_CONST (0x70ull)
-#define NIX_AF_TL2_CONST (0x78ull)
-#define NIX_AF_TL3_CONST (0x80ull)
-#define NIX_AF_TL4_CONST (0x88ull)
-#define NIX_AF_MDQ_CONST (0x90ull)
-#define NIX_AF_MC_MIRROR_CONST (0x98ull)
-#define NIX_AF_LSO_CFG (0xa8ull)
-#define NIX_AF_BLK_RST (0xb0ull)
-#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
-#define NIX_AF_RX_CFG (0xd0ull)
-#define NIX_AF_AVG_DELAY (0xe0ull)
-#define NIX_AF_CINT_DELAY (0xf0ull)
-#define NIX_AF_RX_MCAST_BASE (0x100ull)
-#define NIX_AF_RX_MCAST_CFG (0x110ull)
-#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
-#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
-#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
-#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
-#define NIX_AF_LF_RST (0x150ull)
-#define NIX_AF_GEN_INT (0x160ull)
-#define NIX_AF_GEN_INT_W1S (0x168ull)
-#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
-#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
-#define NIX_AF_ERR_INT (0x180ull)
-#define NIX_AF_ERR_INT_W1S (0x188ull)
-#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NIX_AF_RAS (0x1a0ull)
-#define NIX_AF_RAS_W1S (0x1a8ull)
-#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
-#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
-#define NIX_AF_RVU_INT (0x1c0ull)
-#define NIX_AF_RVU_INT_W1S (0x1c8ull)
-#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
-#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
-#define NIX_AF_TCP_TIMER (0x1e0ull)
-#define NIX_AF_RX_DEF_OL2 (0x200ull)
-#define NIX_AF_RX_DEF_OIP4 (0x210ull)
-#define NIX_AF_RX_DEF_IIP4 (0x220ull)
-#define NIX_AF_RX_DEF_OIP6 (0x230ull)
-#define NIX_AF_RX_DEF_IIP6 (0x240ull)
-#define NIX_AF_RX_DEF_OTCP (0x250ull)
-#define NIX_AF_RX_DEF_ITCP (0x260ull)
-#define NIX_AF_RX_DEF_OUDP (0x270ull)
-#define NIX_AF_RX_DEF_IUDP (0x280ull)
-#define NIX_AF_RX_DEF_OSCTP (0x290ull)
-#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
-#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
-#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
-#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
-#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
-#define NIX_AF_AQ_CFG (0x400ull)
-#define NIX_AF_AQ_BASE (0x410ull)
-#define NIX_AF_AQ_STATUS (0x420ull)
-#define NIX_AF_AQ_DOOR (0x430ull)
-#define NIX_AF_AQ_DONE_WAIT (0x440ull)
-#define NIX_AF_AQ_DONE (0x450ull)
-#define NIX_AF_AQ_DONE_ACK (0x460ull)
-#define NIX_AF_AQ_DONE_TIMER (0x470ull)
-#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
-#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
-#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_SW_SYNC (0x550ull)
-#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
-#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull)
-#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
-#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
-#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
-#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
-#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
-#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
-#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
-#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
-#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull)
-#define NIX_AF_PSE_SHAPER_CFG (0x810ull)
-#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull)
-#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18)
-#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16)
-#define NIX_AF_SDP_LINK_CREDIT (0xa40ull)
-#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3)
-#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3)
-#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHEDULE(a) \
- (0x1000ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE(a) \
- (0x1010ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_CIR(a) \
- (0x1020ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PIR(a) \
- (0x1030ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHED_STATE(a) \
- (0x1040ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE_STATE(a) \
- (0x1050ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SW_XOFF(a) \
- (0x1070ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_TOPOLOGY(a) \
- (0x1080ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PARENT(a) \
- (0x1088ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG0(a) \
- (0x10c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG1(a) \
- (0x10c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG2(a) \
- (0x10d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG3(a) \
- (0x10d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHEDULE(a) \
- (0x1200ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE(a) \
- (0x1210ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_CIR(a) \
- (0x1220ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PIR(a) \
- (0x1230ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHED_STATE(a) \
- (0x1240ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE_STATE(a) \
- (0x1250ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SW_XOFF(a) \
- (0x1270ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_TOPOLOGY(a) \
- (0x1280ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PARENT(a) \
- (0x1288ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG0(a) \
- (0x12c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG1(a) \
- (0x12c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG2(a) \
- (0x12d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG3(a) \
- (0x12d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHEDULE(a) \
- (0x1400ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE(a) \
- (0x1410ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_CIR(a) \
- (0x1420ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PIR(a) \
- (0x1430ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHED_STATE(a) \
- (0x1440ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE_STATE(a) \
- (0x1450ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SW_XOFF(a) \
- (0x1470ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PARENT(a) \
- (0x1480ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_MD_DEBUG(a) \
- (0x14c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_CFG(a) \
- (0x1600ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_BP_STATUS(a) \
- (0x1610ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
- (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
- (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
-#define NIX_AF_TX_MCASTX(a) \
- (0x1900ull | (uint64_t)(a) << 15)
-#define NIX_AF_TX_VTAG_DEFX_CTL(a) \
- (0x1a00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_VTAG_DEFX_DATA(a) \
- (0x1a10ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_BPIDX_STATUS(a) \
- (0x1a20ull | (uint64_t)(a) << 17)
-#define NIX_AF_RX_CHANX_CFG(a) \
- (0x1a30ull | (uint64_t)(a) << 15)
-#define NIX_AF_CINT_TIMERX(a) \
- (0x1a40ull | (uint64_t)(a) << 18)
-#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
- (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_CFG(a) \
- (0x4000ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_CFG(a) \
- (0x4020ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG2(a) \
- (0x4028ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_BASE(a) \
- (0x4030ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_CFG(a) \
- (0x4040ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_BASE(a) \
- (0x4050ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_CFG(a) \
- (0x4060ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_BASE(a) \
- (0x4070ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG(a) \
- (0x4080ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_PARSE_CFG(a) \
- (0x4090ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_CFG(a) \
- (0x40a0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_CFG(a) \
- (0x40c0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_BASE(a) \
- (0x40d0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_CFG(a) \
- (0x4100ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_BASE(a) \
- (0x4110ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_CFG(a) \
- (0x4120ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_BASE(a) \
- (0x4130ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \
- (0x4140ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \
- (0x4148ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \
- (0x4150ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \
- (0x4158ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \
- (0x4170ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_STATUS(a) \
- (0x4180ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
- (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_LOCKX(a, b) \
- (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_TX_STATX(a, b) \
- (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RX_STATX(a, b) \
- (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RSS_GRPX(a, b) \
- (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_NPC_MC_RCV (0x4700ull)
-#define NIX_AF_RX_NPC_MC_DROP (0x4710ull)
-#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull)
-#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull)
-#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \
- (0x4800ull | (uint64_t)(a) << 16)
-#define NIX_PRIV_AF_INT_CFG (0x8000000ull)
-#define NIX_PRIV_LFX_CFG(a) \
- (0x8000010ull | (uint64_t)(a) << 8)
-#define NIX_PRIV_LFX_INT_CFG(a) \
- (0x8000020ull | (uint64_t)(a) << 8)
-#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull)
-
-#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3)
-#define NIX_LF_CFG (0x100ull)
-#define NIX_LF_GINT (0x200ull)
-#define NIX_LF_GINT_W1S (0x208ull)
-#define NIX_LF_GINT_ENA_W1C (0x210ull)
-#define NIX_LF_GINT_ENA_W1S (0x218ull)
-#define NIX_LF_ERR_INT (0x220ull)
-#define NIX_LF_ERR_INT_W1S (0x228ull)
-#define NIX_LF_ERR_INT_ENA_W1C (0x230ull)
-#define NIX_LF_ERR_INT_ENA_W1S (0x238ull)
-#define NIX_LF_RAS (0x240ull)
-#define NIX_LF_RAS_W1S (0x248ull)
-#define NIX_LF_RAS_ENA_W1C (0x250ull)
-#define NIX_LF_RAS_ENA_W1S (0x258ull)
-#define NIX_LF_SQ_OP_ERR_DBG (0x260ull)
-#define NIX_LF_MNQ_ERR_DBG (0x270ull)
-#define NIX_LF_SEND_ERR_DBG (0x280ull)
-#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3)
-#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3)
-#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3)
-#define NIX_LF_RQ_OP_INT (0x900ull)
-#define NIX_LF_RQ_OP_OCTS (0x910ull)
-#define NIX_LF_RQ_OP_PKTS (0x920ull)
-#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull)
-#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull)
-#define NIX_LF_RQ_OP_RE_PKTS (0x950ull)
-#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull)
-#define NIX_LF_SQ_OP_INT (0xa00ull)
-#define NIX_LF_SQ_OP_OCTS (0xa10ull)
-#define NIX_LF_SQ_OP_PKTS (0xa20ull)
-#define NIX_LF_SQ_OP_STATUS (0xa30ull)
-#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull)
-#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull)
-#define NIX_LF_CQ_OP_INT (0xb00ull)
-#define NIX_LF_CQ_OP_DOOR (0xb30ull)
-#define NIX_LF_CQ_OP_STATUS (0xb40ull)
-#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NIX_TX_VTAGOP_NOP (0x0ull)
-#define NIX_TX_VTAGOP_INSERT (0x1ull)
-#define NIX_TX_VTAGOP_REPLACE (0x2ull)
-
-#define NIX_TX_ACTIONOP_DROP (0x0ull)
-#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
-#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
-#define NIX_TX_ACTIONOP_MCAST (0x3ull)
-#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
-
-#define NIX_INTF_RX (0x0ull)
-#define NIX_INTF_TX (0x1ull)
-
-#define NIX_TXLAYER_OL3 (0x0ull)
-#define NIX_TXLAYER_OL4 (0x1ull)
-#define NIX_TXLAYER_IL3 (0x2ull)
-#define NIX_TXLAYER_IL4 (0x3ull)
-
-#define NIX_SUBDC_NOP (0x0ull)
-#define NIX_SUBDC_EXT (0x1ull)
-#define NIX_SUBDC_CRC (0x2ull)
-#define NIX_SUBDC_IMM (0x3ull)
-#define NIX_SUBDC_SG (0x4ull)
-#define NIX_SUBDC_MEM (0x5ull)
-#define NIX_SUBDC_JUMP (0x6ull)
-#define NIX_SUBDC_WORK (0x7ull)
-#define NIX_SUBDC_SOD (0xfull)
-
-#define NIX_STYPE_STF (0x0ull)
-#define NIX_STYPE_STT (0x1ull)
-#define NIX_STYPE_STP (0x2ull)
-
-#define NIX_STAT_LF_TX_TX_UCAST (0x0ull)
-#define NIX_STAT_LF_TX_TX_BCAST (0x1ull)
-#define NIX_STAT_LF_TX_TX_MCAST (0x2ull)
-#define NIX_STAT_LF_TX_TX_DROP (0x3ull)
-#define NIX_STAT_LF_TX_TX_OCTS (0x4ull)
-
-#define NIX_STAT_LF_RX_RX_OCTS (0x0ull)
-#define NIX_STAT_LF_RX_RX_UCAST (0x1ull)
-#define NIX_STAT_LF_RX_RX_BCAST (0x2ull)
-#define NIX_STAT_LF_RX_RX_MCAST (0x3ull)
-#define NIX_STAT_LF_RX_RX_DROP (0x4ull)
-#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull)
-#define NIX_STAT_LF_RX_RX_FCS (0x6ull)
-#define NIX_STAT_LF_RX_RX_ERR (0x7ull)
-#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull)
-#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull)
-#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull)
-#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull)
-
-#define NIX_SQOPERR_SQ_OOR (0x0ull)
-#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull)
-#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull)
-#define NIX_SQOPERR_SQ_DISABLED (0x3ull)
-#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull)
-#define NIX_SQOPERR_SQE_OFLOW (0x5ull)
-#define NIX_SQOPERR_SQB_NULL (0x6ull)
-#define NIX_SQOPERR_SQB_FAULT (0x7ull)
-
-#define NIX_XQESZ_W64 (0x0ull)
-#define NIX_XQESZ_W16 (0x1ull)
-
-#define NIX_VTAGSIZE_T4 (0x0ull)
-#define NIX_VTAGSIZE_T8 (0x1ull)
-
-#define NIX_RX_ACTIONOP_DROP (0x0ull)
-#define NIX_RX_ACTIONOP_UCAST (0x1ull)
-#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull)
-#define NIX_RX_ACTIONOP_MCAST (0x3ull)
-#define NIX_RX_ACTIONOP_RSS (0x4ull)
-#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull)
-#define NIX_RX_ACTIONOP_MIRROR (0x6ull)
-
-#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull)
-#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull)
-#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull)
-#define NIX_TX_VTAGACTION_VTAG0_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6)
-#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
-#define NIX_RQINT_DROP (0x0ull)
-#define NIX_RQINT_RED (0x1ull)
-#define NIX_RQINT_R2 (0x2ull)
-#define NIX_RQINT_R3 (0x3ull)
-#define NIX_RQINT_R4 (0x4ull)
-#define NIX_RQINT_R5 (0x5ull)
-#define NIX_RQINT_R6 (0x6ull)
-#define NIX_RQINT_R7 (0x7ull)
-
-#define NIX_MAXSQESZ_W16 (0x0ull)
-#define NIX_MAXSQESZ_W8 (0x1ull)
-
-#define NIX_LSOALG_NOP (0x0ull)
-#define NIX_LSOALG_ADD_SEGNUM (0x1ull)
-#define NIX_LSOALG_ADD_PAYLEN (0x2ull)
-#define NIX_LSOALG_ADD_OFFSET (0x3ull)
-#define NIX_LSOALG_TCP_FLAGS (0x4ull)
-
-#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull)
-#define NIX_MNQERR_SQ_CTX_POISON (0x1ull)
-#define NIX_MNQERR_SQB_FAULT (0x2ull)
-#define NIX_MNQERR_SQB_POISON (0x3ull)
-#define NIX_MNQERR_TOTAL_ERR (0x4ull)
-#define NIX_MNQERR_LSO_ERR (0x5ull)
-#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
-#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
-#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
-#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
-
-#define NIX_MDTYPE_RSVD (0x0ull)
-#define NIX_MDTYPE_FLUSH (0x1ull)
-#define NIX_MDTYPE_PMD (0x2ull)
-
-#define NIX_NDC_TX_PORT_LMT (0x0ull)
-#define NIX_NDC_TX_PORT_ENQ (0x1ull)
-#define NIX_NDC_TX_PORT_MNQ (0x2ull)
-#define NIX_NDC_TX_PORT_DEQ (0x3ull)
-#define NIX_NDC_TX_PORT_DMA (0x4ull)
-#define NIX_NDC_TX_PORT_XQE (0x5ull)
-
-#define NIX_NDC_RX_PORT_AQ (0x0ull)
-#define NIX_NDC_RX_PORT_CQ (0x1ull)
-#define NIX_NDC_RX_PORT_CINT (0x2ull)
-#define NIX_NDC_RX_PORT_MC (0x3ull)
-#define NIX_NDC_RX_PORT_PKT (0x4ull)
-#define NIX_NDC_RX_PORT_RQ (0x5ull)
-
-#define NIX_RE_OPCODE_RE_NONE (0x0ull)
-#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
-#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
-#define NIX_RE_OPCODE_RE_FCS (0x7ull)
-#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
-#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
-#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
-#define NIX_RE_OPCODE_RE_SKIP (0xcull)
-#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
-#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
-#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
-#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
-
-#define NIX_REDALG_STD (0x0ull)
-#define NIX_REDALG_SEND (0x1ull)
-#define NIX_REDALG_STALL (0x2ull)
-#define NIX_REDALG_DISCARD (0x3ull)
-
-#define NIX_RX_MCOP_RQ (0x0ull)
-#define NIX_RX_MCOP_RSS (0x1ull)
-
-#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
-#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
-#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
-#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
-#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
-#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
-#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
-#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
-#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
-#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
-#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
-#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
-#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
-#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
-#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
-#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
-
-#define NIX_SENDCRCALG_CRC32 (0x0ull)
-#define NIX_SENDCRCALG_CRC32C (0x1ull)
-#define NIX_SENDCRCALG_ONES16 (0x2ull)
-
-#define NIX_SENDL3TYPE_NONE (0x0ull)
-#define NIX_SENDL3TYPE_IP4 (0x2ull)
-#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
-#define NIX_SENDL3TYPE_IP6 (0x4ull)
-
-#define NIX_SENDL4TYPE_NONE (0x0ull)
-#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
-#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
-#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
-
-#define NIX_SENDLDTYPE_LDD (0x0ull)
-#define NIX_SENDLDTYPE_LDT (0x1ull)
-#define NIX_SENDLDTYPE_LDWB (0x2ull)
-
-#define NIX_SENDMEMALG_SET (0x0ull)
-#define NIX_SENDMEMALG_SETTSTMP (0x1ull)
-#define NIX_SENDMEMALG_SETRSLT (0x2ull)
-#define NIX_SENDMEMALG_ADD (0x8ull)
-#define NIX_SENDMEMALG_SUB (0x9ull)
-#define NIX_SENDMEMALG_ADDLEN (0xaull)
-#define NIX_SENDMEMALG_SUBLEN (0xbull)
-#define NIX_SENDMEMALG_ADDMBUF (0xcull)
-#define NIX_SENDMEMALG_SUBMBUF (0xdull)
-
-#define NIX_SENDMEMDSZ_B64 (0x0ull)
-#define NIX_SENDMEMDSZ_B32 (0x1ull)
-#define NIX_SENDMEMDSZ_B16 (0x2ull)
-#define NIX_SENDMEMDSZ_B8 (0x3ull)
-
-#define NIX_SEND_STATUS_GOOD (0x0ull)
-#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull)
-#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull)
-#define NIX_SEND_STATUS_SQB_FAULT (0x3ull)
-#define NIX_SEND_STATUS_SQB_POISON (0x4ull)
-#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull)
-#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull)
-#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull)
-#define NIX_SEND_STATUS_JUMP_POISON (0x8ull)
-#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull)
-#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull)
-#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull)
-#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull)
-#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull)
-#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull)
-#define NIX_SEND_STATUS_DATA_FAULT (0x16ull)
-#define NIX_SEND_STATUS_DATA_POISON (0x17ull)
-#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull)
-#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull)
-#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull)
-#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull)
-#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull)
-#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull)
-#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull)
-#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull)
-
-#define NIX_SQINT_LMT_ERR (0x0ull)
-#define NIX_SQINT_MNQ_ERR (0x1ull)
-#define NIX_SQINT_SEND_ERR (0x2ull)
-#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull)
-
-#define NIX_XQE_TYPE_INVALID (0x0ull)
-#define NIX_XQE_TYPE_RX (0x1ull)
-#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
-#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
-#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
-#define NIX_XQE_TYPE_SEND (0x8ull)
-
-#define NIX_AQ_COMP_NOTDONE (0x0ull)
-#define NIX_AQ_COMP_GOOD (0x1ull)
-#define NIX_AQ_COMP_SWERR (0x2ull)
-#define NIX_AQ_COMP_CTX_POISON (0x3ull)
-#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
-#define NIX_AQ_COMP_LOCKERR (0x5ull)
-#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
-
-#define NIX_AF_INT_VEC_RVU (0x0ull)
-#define NIX_AF_INT_VEC_GEN (0x1ull)
-#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
-#define NIX_AF_INT_VEC_POISON (0x4ull)
-
-#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
-#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
-#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
-#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
-
-#define NIX_AQ_INSTOP_NOP (0x0ull)
-#define NIX_AQ_INSTOP_INIT (0x1ull)
-#define NIX_AQ_INSTOP_WRITE (0x2ull)
-#define NIX_AQ_INSTOP_READ (0x3ull)
-#define NIX_AQ_INSTOP_LOCK (0x4ull)
-#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NIX_AQ_CTYPE_RQ (0x0ull)
-#define NIX_AQ_CTYPE_SQ (0x1ull)
-#define NIX_AQ_CTYPE_CQ (0x2ull)
-#define NIX_AQ_CTYPE_MCE (0x3ull)
-#define NIX_AQ_CTYPE_RSS (0x4ull)
-#define NIX_AQ_CTYPE_DYNO (0x5ull)
-
-#define NIX_COLORRESULT_GREEN (0x0ull)
-#define NIX_COLORRESULT_YELLOW (0x1ull)
-#define NIX_COLORRESULT_RED_SEND (0x2ull)
-#define NIX_COLORRESULT_RED_DROP (0x3ull)
-
-#define NIX_CHAN_LBKX_CHX(a, b) \
- (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
-#define NIX_CHAN_R4 (0x400ull)
-#define NIX_CHAN_R5 (0x500ull)
-#define NIX_CHAN_R6 (0x600ull)
-#define NIX_CHAN_SDP_CH_END (0x7ffull)
-#define NIX_CHAN_SDP_CH_START (0x700ull)
-#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
- (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \
- (uint64_t)(c))
-
-#define NIX_INTF_SDP (0x4ull)
-#define NIX_INTF_CGX0 (0x0ull)
-#define NIX_INTF_CGX1 (0x1ull)
-#define NIX_INTF_CGX2 (0x2ull)
-#define NIX_INTF_LBK0 (0x3ull)
-
-#define NIX_CQERRINT_DOOR_ERR (0x0ull)
-#define NIX_CQERRINT_WR_FULL (0x1ull)
-#define NIX_CQERRINT_CQE_FAULT (0x2ull)
-
-#define NIX_LF_INT_VEC_GINT (0x80ull)
-#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
-#define NIX_LF_INT_VEC_POISON (0x82ull)
-#define NIX_LF_INT_VEC_QINT_END (0x3full)
-#define NIX_LF_INT_VEC_QINT_START (0x0ull)
-#define NIX_LF_INT_VEC_CINT_END (0x7full)
-#define NIX_LF_INT_VEC_CINT_START (0x40ull)
-
-/* Enums definitions */
-
-/* Structures definitions */
-
-/* NIX admin queue instruction structure */
-struct nix_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 7;
- uint64_t rsvd_23_15 : 9;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NIX admin queue result structure */
-struct nix_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NIX completion interrupt context hardware structure */
-struct nix_cint_hw_s {
- uint64_t ecount : 32;
- uint64_t qcount : 16;
- uint64_t intr : 1;
- uint64_t ena : 1;
- uint64_t timer_idx : 8;
- uint64_t rsvd_63_58 : 6;
- uint64_t ecount_wait : 32;
- uint64_t qcount_wait : 16;
- uint64_t time_wait : 8;
- uint64_t rsvd_127_120 : 8;
-};
-
-/* NIX completion queue entry header structure */
-struct nix_cqe_hdr_s {
- uint64_t tag : 32;
- uint64_t q : 20;
- uint64_t rsvd_57_52 : 6;
- uint64_t node : 2;
- uint64_t cqe_type : 4;
-};
-
-/* NIX completion queue context structure */
-struct nix_cq_ctx_s {
- uint64_t base : 64;/* W0 */
- uint64_t rsvd_67_64 : 4;
- uint64_t bp_ena : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t bpid : 9;
- uint64_t rsvd_83_81 : 3;
- uint64_t qint_idx : 7;
- uint64_t cq_err : 1;
- uint64_t cint_idx : 7;
- uint64_t avg_con : 9;
- uint64_t wrptr : 20;
- uint64_t tail : 20;
- uint64_t head : 20;
- uint64_t avg_level : 8;
- uint64_t update_time : 16;
- uint64_t bp : 8;
- uint64_t drop : 8;
- uint64_t drop_ena : 1;
- uint64_t ena : 1;
- uint64_t rsvd_211_210 : 2;
- uint64_t substream : 20;
- uint64_t caching : 1;
- uint64_t rsvd_235_233 : 3;
- uint64_t qsize : 4;
- uint64_t cq_err_int : 8;
- uint64_t cq_err_int_ena : 8;
-};
-
-/* NIX instruction header structure */
-struct nix_inst_hdr_s {
- uint64_t pf_func : 16;
- uint64_t sq : 20;
- uint64_t rsvd_63_36 : 28;
-};
-
-/* NIX i/o virtual address structure */
-struct nix_iova_s {
- uint64_t addr : 64; /* W0 */
-};
-
-/* NIX IPsec dynamic ordering counter structure */
-struct nix_ipsec_dyno_s {
- uint32_t count : 32; /* W0 */
-};
-
-/* NIX memory value structure */
-struct nix_mem_result_s {
- uint64_t v : 1;
- uint64_t color : 2;
- uint64_t rsvd_63_3 : 61;
-};
-
-/* NIX statistics operation write data structure */
-struct nix_op_q_wdata_s {
- uint64_t rsvd_31_0 : 32;
- uint64_t q : 20;
- uint64_t rsvd_63_52 : 12;
-};
-
-/* NIX queue interrupt context hardware structure */
-struct nix_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_127_124 : 4;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t rsvd_150_146 : 5;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_319_288 : 32;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_702_688 : 15;
- uint64_t ena_copy : 1;
- uint64_t rsvd_739_704 : 36;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_767_763 : 5;
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t rsvd_127_122 : 6;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_150_148 : 3;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_291_288 : 4;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_319_315 : 5;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_703_688 : 16;
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive side scaling entry structure */
-struct nix_rsse_s {
- uint32_t rq : 20;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NIX receive action structure */
-struct nix_rx_action_s {
- uint64_t op : 4;
- uint64_t pf_func : 16;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_63_61 : 3;
-};
-
-/* NIX receive immediate sub descriptor structure */
-struct nix_rx_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX receive multicast/mirror entry structure */
-struct nix_rx_mce_s {
- uint64_t op : 2;
- uint64_t rsvd_2 : 1;
- uint64_t eol : 1;
- uint64_t index : 20;
- uint64_t rsvd_31_24 : 8;
- uint64_t pf_func : 16;
- uint64_t next : 16;
-};
-
-/* NIX receive parse structure */
-struct nix_rx_parse_s {
- uint64_t chan : 12;
- uint64_t desc_sizem1 : 5;
- uint64_t imm_copy : 1;
- uint64_t express : 1;
- uint64_t wqwd : 1;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t latype : 4;
- uint64_t lbtype : 4;
- uint64_t lctype : 4;
- uint64_t ldtype : 4;
- uint64_t letype : 4;
- uint64_t lftype : 4;
- uint64_t lgtype : 4;
- uint64_t lhtype : 4;
- uint64_t pkt_lenm1 : 16;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t vtag0_valid : 1;
- uint64_t vtag0_gone : 1;
- uint64_t vtag1_valid : 1;
- uint64_t vtag1_gone : 1;
- uint64_t pkind : 6;
- uint64_t rsvd_95_94 : 2;
- uint64_t vtag0_tci : 16;
- uint64_t vtag1_tci : 16;
- uint64_t laflags : 8;
- uint64_t lbflags : 8;
- uint64_t lcflags : 8;
- uint64_t ldflags : 8;
- uint64_t leflags : 8;
- uint64_t lfflags : 8;
- uint64_t lgflags : 8;
- uint64_t lhflags : 8;
- uint64_t eoh_ptr : 8;
- uint64_t wqe_aura : 20;
- uint64_t pb_aura : 20;
- uint64_t match_id : 16;
- uint64_t laptr : 8;
- uint64_t lbptr : 8;
- uint64_t lcptr : 8;
- uint64_t ldptr : 8;
- uint64_t leptr : 8;
- uint64_t lfptr : 8;
- uint64_t lgptr : 8;
- uint64_t lhptr : 8;
- uint64_t vtag0_ptr : 8;
- uint64_t vtag1_ptr : 8;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_383_341 : 43;
- uint64_t rsvd_447_384 : 64; /* W6 */
-};
-
-/* NIX receive scatter/gather sub descriptor structure */
-struct nix_rx_sg_s {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_59_50 : 10;
- uint64_t subdc : 4;
-};
-
-/* NIX receive vtag action structure */
-struct nix_rx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_type : 3;
- uint64_t vtag0_valid : 1;
- uint64_t rsvd_31_16 : 16;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_type : 3;
- uint64_t vtag1_valid : 1;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX send completion structure */
-struct nix_send_comp_s {
- uint64_t status : 8;
- uint64_t sqe_id : 16;
- uint64_t rsvd_63_24 : 40;
-};
-
-/* NIX send CRC sub descriptor structure */
-struct nix_send_crc_s {
- uint64_t size : 16;
- uint64_t start : 16;
- uint64_t insert : 16;
- uint64_t rsvd_57_48 : 10;
- uint64_t alg : 2;
- uint64_t subdc : 4;
- uint64_t iv : 32;
- uint64_t rsvd_127_96 : 32;
-};
-
-/* NIX send extended header sub descriptor structure */
-RTE_STD_C11
-union nix_send_ext_w0_u {
- uint64_t u;
- struct {
- uint64_t lso_mps : 14;
- uint64_t lso : 1;
- uint64_t tstmp : 1;
- uint64_t lso_sb : 8;
- uint64_t lso_format : 5;
- uint64_t rsvd_31_29 : 3;
- uint64_t shp_chg : 9;
- uint64_t shp_dis : 1;
- uint64_t shp_ra : 2;
- uint64_t markptr : 8;
- uint64_t markform : 7;
- uint64_t mark_en : 1;
- uint64_t subdc : 4;
- };
-};
-
-RTE_STD_C11
-union nix_send_ext_w1_u {
- uint64_t u;
- struct {
- uint64_t vlan0_ins_ptr : 8;
- uint64_t vlan0_ins_tci : 16;
- uint64_t vlan1_ins_ptr : 8;
- uint64_t vlan1_ins_tci : 16;
- uint64_t vlan0_ins_ena : 1;
- uint64_t vlan1_ins_ena : 1;
- uint64_t rsvd_127_114 : 14;
- };
-};
-
-struct nix_send_ext_s {
- union nix_send_ext_w0_u w0;
- union nix_send_ext_w1_u w1;
-};
-
-/* NIX send header sub descriptor structure */
-RTE_STD_C11
-union nix_send_hdr_w0_u {
- uint64_t u;
- struct {
- uint64_t total : 18;
- uint64_t rsvd_18 : 1;
- uint64_t df : 1;
- uint64_t aura : 20;
- uint64_t sizem1 : 3;
- uint64_t pnc : 1;
- uint64_t sq : 20;
- };
-};
-
-RTE_STD_C11
-union nix_send_hdr_w1_u {
- uint64_t u;
- struct {
- uint64_t ol3ptr : 8;
- uint64_t ol4ptr : 8;
- uint64_t il3ptr : 8;
- uint64_t il4ptr : 8;
- uint64_t ol3type : 4;
- uint64_t ol4type : 4;
- uint64_t il3type : 4;
- uint64_t il4type : 4;
- uint64_t sqe_id : 16;
- };
-};
-
-struct nix_send_hdr_s {
- union nix_send_hdr_w0_u w0;
- union nix_send_hdr_w1_u w1;
-};
-
-/* NIX send immediate sub descriptor structure */
-struct nix_send_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX send jump sub descriptor structure */
-struct nix_send_jump_s {
- uint64_t sizem1 : 7;
- uint64_t rsvd_13_7 : 7;
- uint64_t ld_type : 2;
- uint64_t aura : 20;
- uint64_t rsvd_58_36 : 23;
- uint64_t f : 1;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send memory sub descriptor structure */
-struct nix_send_mem_s {
- uint64_t offset : 16;
- uint64_t rsvd_52_16 : 37;
- uint64_t wmem : 1;
- uint64_t dsz : 2;
- uint64_t alg : 4;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send scatter/gather sub descriptor structure */
-RTE_STD_C11
-union nix_send_sg_s {
- uint64_t u;
- struct {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_54_50 : 5;
- uint64_t i1 : 1;
- uint64_t i2 : 1;
- uint64_t i3 : 1;
- uint64_t ld_type : 2;
- uint64_t subdc : 4;
- };
-};
-
-/* NIX send work sub descriptor structure */
-struct nix_send_work_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_59_44 : 16;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX sq context hardware structure */
-struct nix_sq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t substream : 20;
- uint64_t max_sqe_size : 2;
- uint64_t sqe_way_mask : 16;
- uint64_t sqb_aura : 20;
- uint64_t gbl_rsvd1 : 5;
- uint64_t cq_id : 20;
- uint64_t cq_ena : 1;
- uint64_t qint_idx : 6;
- uint64_t gbl_rsvd2 : 1;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t xoff : 1;
- uint64_t sqe_stype : 2;
- uint64_t gbl_rsvd : 17;
- uint64_t head_sqb : 64;/* W2 */
- uint64_t head_offset : 6;
- uint64_t sqb_dequeue_count : 16;
- uint64_t default_chan : 12;
- uint64_t sdp_mcast : 1;
- uint64_t sso_ena : 1;
- uint64_t dse_rsvd1 : 28;
- uint64_t sqb_enqueue_count : 16;
- uint64_t tail_offset : 6;
- uint64_t lmt_dis : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t dnq_rsvd1 : 17;
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t next_sqb : 64;/* W6 */
- uint64_t mnq_dis : 1;
- uint64_t smq : 9;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_next_sq_vld : 1;
- uint64_t scm1_rsvd2 : 32;
- uint64_t smenq_sqb : 64;/* W8 */
- uint64_t smenq_offset : 6;
- uint64_t cq_limit : 8;
- uint64_t smq_rr_count : 25;
- uint64_t scm_lso_rem : 18;
- uint64_t scm_dq_rsvd0 : 7;
- uint64_t smq_lso_segnum : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t scm_dq_rsvd1 : 9;
- uint64_t smenq_next_sqb : 64;/* W11 */
- uint64_t seb_rsvd1 : 64;/* W12 */
- uint64_t drop_pkts : 48;
- uint64_t drop_octs_lsw : 16;
- uint64_t drop_octs_msw : 32;
- uint64_t pkts_lsw : 32;
- uint64_t pkts_msw : 16;
- uint64_t octs : 48;
-};
-
-/* NIX send queue context structure */
-struct nix_sq_ctx_s {
- uint64_t ena : 1;
- uint64_t qint_idx : 6;
- uint64_t substream : 20;
- uint64_t sdp_mcast : 1;
- uint64_t cq : 20;
- uint64_t sqe_way_mask : 16;
- uint64_t smq : 9;
- uint64_t cq_ena : 1;
- uint64_t xoff : 1;
- uint64_t sso_ena : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t default_chan : 12;
- uint64_t sqb_count : 16;
- uint64_t smq_rr_count : 25;
- uint64_t sqb_aura : 20;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t sqe_stype : 2;
- uint64_t rsvd_191 : 1;
- uint64_t max_sqe_size : 2;
- uint64_t cq_limit : 8;
- uint64_t lmt_dis : 1;
- uint64_t mnq_dis : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_lso_segnum : 8;
- uint64_t tail_offset : 6;
- uint64_t smenq_offset : 6;
- uint64_t head_offset : 6;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq_vld : 1;
- uint64_t rsvd_255_253 : 3;
- uint64_t next_sqb : 64;/* W4 */
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t smenq_sqb : 64;/* W6 */
- uint64_t smenq_next_sqb : 64;/* W7 */
- uint64_t head_sqb : 64;/* W8 */
- uint64_t rsvd_583_576 : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t rsvd_639_630 : 10;
- uint64_t scm_lso_rem : 18;
- uint64_t rsvd_703_658 : 46;
- uint64_t octs : 48;
- uint64_t rsvd_767_752 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_831_816 : 16;
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t drop_octs : 48;
- uint64_t rsvd_959_944 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_1023_1008 : 16;
-};
-
-/* NIX transmit action structure */
-struct nix_tx_action_s {
- uint64_t op : 4;
- uint64_t rsvd_11_4 : 8;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX transmit vtag action structure */
-struct nix_tx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_op : 2;
- uint64_t rsvd_15_14 : 2;
- uint64_t vtag0_def : 10;
- uint64_t rsvd_31_26 : 6;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_op : 2;
- uint64_t rsvd_47_46 : 2;
- uint64_t vtag1_def : 10;
- uint64_t rsvd_63_58 : 6;
-};
-
-/* NIX work queue entry header structure */
-struct nix_wqe_hdr_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t node : 2;
- uint64_t q : 14;
- uint64_t wqe_type : 4;
-};
-
-/* NIX Rx flow key algorithm field structure */
-struct nix_rx_flowkey_alg {
- uint64_t key_offset :6;
- uint64_t ln_mask :1;
- uint64_t fn_mask :1;
- uint64_t hdr_offset :8;
- uint64_t bytesm1 :5;
- uint64_t lid :3;
- uint64_t reserved_24_24 :1;
- uint64_t ena :1;
- uint64_t sel_chan :1;
- uint64_t ltype_mask :4;
- uint64_t ltype_match :4;
- uint64_t reserved_35_63 :29;
-};
-
-/* NIX LSO format field structure */
-struct nix_lso_format {
- uint64_t offset : 8;
- uint64_t layer : 2;
- uint64_t rsvd_10_11 : 2;
- uint64_t sizem1 : 2;
- uint64_t rsvd_14_15 : 2;
- uint64_t alg : 3;
- uint64_t rsvd_19_63 : 45;
-};
-
-#define NIX_LSO_FIELD_MAX (8)
-#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16)
-#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12)
-#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8)
-#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0)
-
-#define NIX_LSO_FIELD_MASK \
- (NIX_LSO_FIELD_OFF_MASK | \
- NIX_LSO_FIELD_LY_MASK | \
- NIX_LSO_FIELD_SZ_MASK | \
- NIX_LSO_FIELD_ALG_MASK)
-
-#endif /* __OTX2_NIX_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h
deleted file mode 100644
index 2224216c96..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npa.h
+++ /dev/null
@@ -1,305 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPA_HW_H__
-#define __OTX2_NPA_HW_H__
-
-/* Register offsets */
-
-#define NPA_AF_BLK_RST (0x0ull)
-#define NPA_AF_CONST (0x10ull)
-#define NPA_AF_CONST1 (0x18ull)
-#define NPA_AF_LF_RST (0x20ull)
-#define NPA_AF_GEN_CFG (0x30ull)
-#define NPA_AF_NDC_CFG (0x40ull)
-#define NPA_AF_NDC_SYNC (0x50ull)
-#define NPA_AF_INP_CTL (0xd0ull)
-#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull)
-#define NPA_AF_AVG_DELAY (0x100ull)
-#define NPA_AF_GEN_INT (0x140ull)
-#define NPA_AF_GEN_INT_W1S (0x148ull)
-#define NPA_AF_GEN_INT_ENA_W1S (0x150ull)
-#define NPA_AF_GEN_INT_ENA_W1C (0x158ull)
-#define NPA_AF_RVU_INT (0x160ull)
-#define NPA_AF_RVU_INT_W1S (0x168ull)
-#define NPA_AF_RVU_INT_ENA_W1S (0x170ull)
-#define NPA_AF_RVU_INT_ENA_W1C (0x178ull)
-#define NPA_AF_ERR_INT (0x180ull)
-#define NPA_AF_ERR_INT_W1S (0x188ull)
-#define NPA_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NPA_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NPA_AF_RAS (0x1a0ull)
-#define NPA_AF_RAS_W1S (0x1a8ull)
-#define NPA_AF_RAS_ENA_W1S (0x1b0ull)
-#define NPA_AF_RAS_ENA_W1C (0x1b8ull)
-#define NPA_AF_AQ_CFG (0x600ull)
-#define NPA_AF_AQ_BASE (0x610ull)
-#define NPA_AF_AQ_STATUS (0x620ull)
-#define NPA_AF_AQ_DOOR (0x630ull)
-#define NPA_AF_AQ_DONE_WAIT (0x640ull)
-#define NPA_AF_AQ_DONE (0x650ull)
-#define NPA_AF_AQ_DONE_ACK (0x660ull)
-#define NPA_AF_AQ_DONE_TIMER (0x670ull)
-#define NPA_AF_AQ_DONE_INT (0x680ull)
-#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull)
-#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull)
-#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18)
-#define NPA_PRIV_AF_INT_CFG (0x10000ull)
-#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8)
-#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8)
-#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull)
-#define NPA_AF_DTX_FILTER_CTL (0x10040ull)
-
-#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3)
-#define NPA_LF_AURA_OP_FREE0 (0x20ull)
-#define NPA_LF_AURA_OP_FREE1 (0x28ull)
-#define NPA_LF_AURA_OP_CNT (0x30ull)
-#define NPA_LF_AURA_OP_LIMIT (0x50ull)
-#define NPA_LF_AURA_OP_INT (0x60ull)
-#define NPA_LF_AURA_OP_THRESH (0x70ull)
-#define NPA_LF_POOL_OP_PC (0x100ull)
-#define NPA_LF_POOL_OP_AVAILABLE (0x110ull)
-#define NPA_LF_POOL_OP_PTR_START0 (0x120ull)
-#define NPA_LF_POOL_OP_PTR_START1 (0x128ull)
-#define NPA_LF_POOL_OP_PTR_END0 (0x130ull)
-#define NPA_LF_POOL_OP_PTR_END1 (0x138ull)
-#define NPA_LF_POOL_OP_INT (0x160ull)
-#define NPA_LF_POOL_OP_THRESH (0x170ull)
-#define NPA_LF_ERR_INT (0x200ull)
-#define NPA_LF_ERR_INT_W1S (0x208ull)
-#define NPA_LF_ERR_INT_ENA_W1C (0x210ull)
-#define NPA_LF_ERR_INT_ENA_W1S (0x218ull)
-#define NPA_LF_RAS (0x220ull)
-#define NPA_LF_RAS_W1S (0x228ull)
-#define NPA_LF_RAS_ENA_W1C (0x230ull)
-#define NPA_LF_RAS_ENA_W1S (0x238ull)
-#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NPA_AQ_COMP_NOTDONE (0x0ull)
-#define NPA_AQ_COMP_GOOD (0x1ull)
-#define NPA_AQ_COMP_SWERR (0x2ull)
-#define NPA_AQ_COMP_CTX_POISON (0x3ull)
-#define NPA_AQ_COMP_CTX_FAULT (0x4ull)
-#define NPA_AQ_COMP_LOCKERR (0x5ull)
-
-#define NPA_AF_INT_VEC_RVU (0x0ull)
-#define NPA_AF_INT_VEC_GEN (0x1ull)
-#define NPA_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NPA_AF_INT_VEC_AF_ERR (0x3ull)
-#define NPA_AF_INT_VEC_POISON (0x4ull)
-
-#define NPA_AQ_INSTOP_NOP (0x0ull)
-#define NPA_AQ_INSTOP_INIT (0x1ull)
-#define NPA_AQ_INSTOP_WRITE (0x2ull)
-#define NPA_AQ_INSTOP_READ (0x3ull)
-#define NPA_AQ_INSTOP_LOCK (0x4ull)
-#define NPA_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NPA_AQ_CTYPE_AURA (0x0ull)
-#define NPA_AQ_CTYPE_POOL (0x1ull)
-
-#define NPA_BPINTF_NIX0_RX (0x0ull)
-#define NPA_BPINTF_NIX1_RX (0x1ull)
-
-#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull)
-#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull)
-#define NPA_AURA_ERR_INT_R4 (0x4ull)
-#define NPA_AURA_ERR_INT_R5 (0x5ull)
-#define NPA_AURA_ERR_INT_R6 (0x6ull)
-#define NPA_AURA_ERR_INT_R7 (0x7ull)
-
-#define NPA_LF_INT_VEC_ERR_INT (0x40ull)
-#define NPA_LF_INT_VEC_POISON (0x41ull)
-#define NPA_LF_INT_VEC_QINT_END (0x3full)
-#define NPA_LF_INT_VEC_QINT_START (0x0ull)
-
-#define NPA_INPQ_SSO (0x4ull)
-#define NPA_INPQ_TIM (0x5ull)
-#define NPA_INPQ_DPI (0x6ull)
-#define NPA_INPQ_AURA_OP (0xeull)
-#define NPA_INPQ_INTERNAL_RSV (0xfull)
-#define NPA_INPQ_NIX0_RX (0x0ull)
-#define NPA_INPQ_NIX1_RX (0x2ull)
-#define NPA_INPQ_NIX0_TX (0x1ull)
-#define NPA_INPQ_NIX1_TX (0x3ull)
-#define NPA_INPQ_R_END (0xdull)
-#define NPA_INPQ_R_START (0x7ull)
-
-#define NPA_POOL_ERR_INT_OVFLS (0x0ull)
-#define NPA_POOL_ERR_INT_RANGE (0x1ull)
-#define NPA_POOL_ERR_INT_PERR (0x2ull)
-#define NPA_POOL_ERR_INT_R3 (0x3ull)
-#define NPA_POOL_ERR_INT_R4 (0x4ull)
-#define NPA_POOL_ERR_INT_R5 (0x5ull)
-#define NPA_POOL_ERR_INT_R6 (0x6ull)
-#define NPA_POOL_ERR_INT_R7 (0x7ull)
-
-#define NPA_NDC0_PORT_AURA0 (0x0ull)
-#define NPA_NDC0_PORT_AURA1 (0x1ull)
-#define NPA_NDC0_PORT_POOL0 (0x2ull)
-#define NPA_NDC0_PORT_POOL1 (0x3ull)
-#define NPA_NDC0_PORT_STACK0 (0x4ull)
-#define NPA_NDC0_PORT_STACK1 (0x5ull)
-
-#define NPA_LF_ERR_INT_AURA_DIS (0x0ull)
-#define NPA_LF_ERR_INT_AURA_OOR (0x1ull)
-#define NPA_LF_ERR_INT_AURA_FAULT (0xcull)
-#define NPA_LF_ERR_INT_POOL_FAULT (0xdull)
-#define NPA_LF_ERR_INT_STACK_FAULT (0xeull)
-#define NPA_LF_ERR_INT_QINT_FAULT (0xfull)
-
-/* Structures definitions */
-
-/* NPA admin queue instruction structure */
-struct npa_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 9;
- uint64_t rsvd_23_17 : 7;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NPA admin queue result structure */
-struct npa_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NPA aura operation write data structure */
-struct npa_aura_op_wdata_s {
- uint64_t aura : 20;
- uint64_t rsvd_62_20 : 43;
- uint64_t drop : 1;
-};
-
-/* NPA aura context structure */
-struct npa_aura_s {
- uint64_t pool_addr : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t rsvd_66_65 : 2;
- uint64_t pool_caching : 1;
- uint64_t pool_way_mask : 16;
- uint64_t avg_con : 9;
- uint64_t rsvd_93 : 1;
- uint64_t pool_drop_ena : 1;
- uint64_t aura_drop_ena : 1;
- uint64_t bp_ena : 2;
- uint64_t rsvd_103_98 : 6;
- uint64_t aura_drop : 8;
- uint64_t shift : 6;
- uint64_t rsvd_119_118 : 2;
- uint64_t avg_level : 8;
- uint64_t count : 36;
- uint64_t rsvd_167_164 : 4;
- uint64_t nix0_bpid : 9;
- uint64_t rsvd_179_177 : 3;
- uint64_t nix1_bpid : 9;
- uint64_t rsvd_191_189 : 3;
- uint64_t limit : 36;
- uint64_t rsvd_231_228 : 4;
- uint64_t bp : 8;
- uint64_t rsvd_243_240 : 4;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t rsvd_255_252 : 4;
- uint64_t fc_addr : 64;/* W4 */
- uint64_t pool_drop : 8;
- uint64_t update_time : 16;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_363 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_371 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_383_379 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_447_420 : 28;
- uint64_t rsvd_511_448 : 64;/* W7 */
-};
-
-/* NPA pool context structure */
-struct npa_pool_s {
- uint64_t stack_base : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t nat_align : 1;
- uint64_t rsvd_67_66 : 2;
- uint64_t stack_caching : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t stack_way_mask : 16;
- uint64_t buf_offset : 12;
- uint64_t rsvd_103_100 : 4;
- uint64_t buf_size : 11;
- uint64_t rsvd_127_115 : 13;
- uint64_t stack_max_pages : 32;
- uint64_t stack_pages : 32;
- uint64_t op_pc : 48;
- uint64_t rsvd_255_240 : 16;
- uint64_t stack_offset : 4;
- uint64_t rsvd_263_260 : 4;
- uint64_t shift : 6;
- uint64_t rsvd_271_270 : 2;
- uint64_t avg_level : 8;
- uint64_t avg_con : 9;
- uint64_t fc_ena : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t fc_up_crossing : 1;
- uint64_t rsvd_299_297 : 3;
- uint64_t update_time : 16;
- uint64_t rsvd_319_316 : 4;
- uint64_t fc_addr : 64;/* W5 */
- uint64_t ptr_start : 64;/* W6 */
- uint64_t ptr_end : 64;/* W7 */
- uint64_t rsvd_535_512 : 24;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_555 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_563 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_575_571 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_639_612 : 28;
- uint64_t rsvd_703_640 : 64;/* W10 */
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NPA queue interrupt context hardware structure */
-struct npa_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-#endif /* __OTX2_NPA_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h
deleted file mode 100644
index b4e3c1eedc..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npc.h
+++ /dev/null
@@ -1,503 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPC_HW_H__
-#define __OTX2_NPC_HW_H__
-
-/* Register offsets */
-
-#define NPC_AF_CFG (0x0ull)
-#define NPC_AF_ACTIVE_PC (0x10ull)
-#define NPC_AF_CONST (0x20ull)
-#define NPC_AF_CONST1 (0x30ull)
-#define NPC_AF_BLK_RST (0x40ull)
-#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull)
-#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull)
-#define NPC_AF_KPUX_CFG(a) \
- (0x500ull | (uint64_t)(a) << 3)
-#define NPC_AF_PCK_CFG (0x600ull)
-#define NPC_AF_PCK_DEF_OL2 (0x610ull)
-#define NPC_AF_PCK_DEF_OIP4 (0x620ull)
-#define NPC_AF_PCK_DEF_OIP6 (0x630ull)
-#define NPC_AF_PCK_DEF_IIP4 (0x640ull)
-#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \
- (0x800ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_KEX_CFG(a) \
- (0x1010ull | (uint64_t)(a) << 8)
-#define NPC_AF_PKINDX_ACTION0(a) \
- (0x80000ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_ACTION1(a) \
- (0x80008ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_CPI_DEFX(a, b) \
- (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CHLEN90B_PKIND (0x3bull)
-#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \
- (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \
- (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \
- (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRY_DISX(a, b) \
- (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CPIX_CFG(a) \
- (0x200000ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \
- (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 5 | (uint64_t)(d) << 3)
-#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \
- (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \
- (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
- (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \
- (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \
- (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \
- (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MATCH_STATX(a) \
- (0x1880008ull | (uint64_t)(a) << 8)
-#define NPC_AF_INTFX_MISS_STAT_ACT(a) \
- (0x1880040ull + (uint64_t)(a) * 0x8)
-#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \
- (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \
- (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_INTFX_MISS_ACT(a) \
- (0x1a00000ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_MISS_TAG_ACT(a) \
- (0x1b00008ull | (uint64_t)(a) << 4)
-#define NPC_AF_MCAM_BANKX_HITX(a, b) \
- (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_LKUP_CTL (0x2000000ull)
-#define NPC_AF_LKUP_DATAX(a) \
- (0x2000200ull | (uint64_t)(a) << 4)
-#define NPC_AF_LKUP_RESULTX(a) \
- (0x2000400ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_STAT(a) \
- (0x2000800ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_CTL (0x3000000ull)
-#define NPC_AF_DBG_STATUS (0x3000010ull)
-#define NPC_AF_KPUX_DBG(a) \
- (0x3000020ull | (uint64_t)(a) << 8)
-#define NPC_AF_IKPU_ERR_CTL (0x3000080ull)
-#define NPC_AF_KPUX_ERR_CTL(a) \
- (0x30000a0ull | (uint64_t)(a) << 8)
-#define NPC_AF_MCAM_DBG (0x3001000ull)
-#define NPC_AF_DBG_DATAX(a) \
- (0x3001400ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_RESULTX(a) \
- (0x3001800ull | (uint64_t)(a) << 4)
-
-
-/* Enum offsets */
-
-#define NPC_INTF_NIX0_RX (0x0ull)
-#define NPC_INTF_NIX0_TX (0x1ull)
-
-#define NPC_LKUPOP_PKT (0x0ull)
-#define NPC_LKUPOP_KEY (0x1ull)
-
-#define NPC_MCAM_KEY_X1 (0x0ull)
-#define NPC_MCAM_KEY_X2 (0x1ull)
-#define NPC_MCAM_KEY_X4 (0x2ull)
-
-enum NPC_ERRLEV_E {
- NPC_ERRLEV_RE = 0,
- NPC_ERRLEV_LA = 1,
- NPC_ERRLEV_LB = 2,
- NPC_ERRLEV_LC = 3,
- NPC_ERRLEV_LD = 4,
- NPC_ERRLEV_LE = 5,
- NPC_ERRLEV_LF = 6,
- NPC_ERRLEV_LG = 7,
- NPC_ERRLEV_LH = 8,
- NPC_ERRLEV_R9 = 9,
- NPC_ERRLEV_R10 = 10,
- NPC_ERRLEV_R11 = 11,
- NPC_ERRLEV_R12 = 12,
- NPC_ERRLEV_R13 = 13,
- NPC_ERRLEV_R14 = 14,
- NPC_ERRLEV_NIX = 15,
- NPC_ERRLEV_ENUM_LAST = 16,
-};
-
-enum npc_kpu_err_code {
- NPC_EC_NOERR = 0, /* has to be zero */
- NPC_EC_UNK,
- NPC_EC_IH_LENGTH,
- NPC_EC_EDSA_UNK,
- NPC_EC_L2_K1,
- NPC_EC_L2_K2,
- NPC_EC_L2_K3,
- NPC_EC_L2_K3_ETYPE_UNK,
- NPC_EC_L2_K4,
- NPC_EC_MPLS_2MANY,
- NPC_EC_MPLS_UNK,
- NPC_EC_NSH_UNK,
- NPC_EC_IP_TTL_0,
- NPC_EC_IP_FRAG_OFFSET_1,
- NPC_EC_IP_VER,
- NPC_EC_IP6_HOP_0,
- NPC_EC_IP6_VER,
- NPC_EC_TCP_FLAGS_FIN_ONLY,
- NPC_EC_TCP_FLAGS_ZERO,
- NPC_EC_TCP_FLAGS_RST_FIN,
- NPC_EC_TCP_FLAGS_URG_SYN,
- NPC_EC_TCP_FLAGS_RST_SYN,
- NPC_EC_TCP_FLAGS_SYN_FIN,
- NPC_EC_VXLAN,
- NPC_EC_NVGRE,
- NPC_EC_GRE,
- NPC_EC_GRE_VER1,
- NPC_EC_L4,
- NPC_EC_OIP4_CSUM,
- NPC_EC_IIP4_CSUM,
- NPC_EC_LAST /* has to be the last item */
-};
-
-enum NPC_LID_E {
- NPC_LID_LA = 0,
- NPC_LID_LB,
- NPC_LID_LC,
- NPC_LID_LD,
- NPC_LID_LE,
- NPC_LID_LF,
- NPC_LID_LG,
- NPC_LID_LH,
-};
-
-#define NPC_LT_NA 0
-
-enum npc_kpu_la_ltype {
- NPC_LT_LA_8023 = 1,
- NPC_LT_LA_ETHER,
- NPC_LT_LA_IH_NIX_ETHER,
- NPC_LT_LA_IH_8_ETHER,
- NPC_LT_LA_IH_4_ETHER,
- NPC_LT_LA_IH_2_ETHER,
- NPC_LT_LA_HIGIG2_ETHER,
- NPC_LT_LA_IH_NIX_HIGIG2_ETHER,
- NPC_LT_LA_CUSTOM_L2_90B_ETHER,
- NPC_LT_LA_CPT_HDR,
- NPC_LT_LA_CUSTOM_L2_24B_ETHER,
- NPC_LT_LA_CUSTOM0 = 0xE,
- NPC_LT_LA_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lb_ltype {
- NPC_LT_LB_ETAG = 1,
- NPC_LT_LB_CTAG,
- NPC_LT_LB_STAG_QINQ,
- NPC_LT_LB_BTAG,
- NPC_LT_LB_ITAG,
- NPC_LT_LB_DSA,
- NPC_LT_LB_DSA_VLAN,
- NPC_LT_LB_EDSA,
- NPC_LT_LB_EDSA_VLAN,
- NPC_LT_LB_EXDSA,
- NPC_LT_LB_EXDSA_VLAN,
- NPC_LT_LB_FDSA,
- NPC_LT_LB_VLAN_EXDSA,
- NPC_LT_LB_CUSTOM0 = 0xE,
- NPC_LT_LB_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lc_ltype {
- NPC_LT_LC_PTP = 1,
- NPC_LT_LC_IP,
- NPC_LT_LC_IP_OPT,
- NPC_LT_LC_IP6,
- NPC_LT_LC_IP6_EXT,
- NPC_LT_LC_ARP,
- NPC_LT_LC_RARP,
- NPC_LT_LC_MPLS,
- NPC_LT_LC_NSH,
- NPC_LT_LC_FCOE,
- NPC_LT_LC_NGIO,
- NPC_LT_LC_CUSTOM0 = 0xE,
- NPC_LT_LC_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * effect flow tag calculation and thus RSS.
- */
-enum npc_kpu_ld_ltype {
- NPC_LT_LD_TCP = 1,
- NPC_LT_LD_UDP,
- NPC_LT_LD_ICMP,
- NPC_LT_LD_SCTP,
- NPC_LT_LD_ICMP6,
- NPC_LT_LD_CUSTOM0,
- NPC_LT_LD_CUSTOM1,
- NPC_LT_LD_IGMP = 8,
- NPC_LT_LD_AH,
- NPC_LT_LD_GRE,
- NPC_LT_LD_NVGRE,
- NPC_LT_LD_NSH,
- NPC_LT_LD_TU_MPLS_IN_NSH,
- NPC_LT_LD_TU_MPLS_IN_IP,
-};
-
-enum npc_kpu_le_ltype {
- NPC_LT_LE_VXLAN = 1,
- NPC_LT_LE_GENEVE,
- NPC_LT_LE_ESP,
- NPC_LT_LE_GTPU = 4,
- NPC_LT_LE_VXLANGPE,
- NPC_LT_LE_GTPC,
- NPC_LT_LE_NSH,
- NPC_LT_LE_TU_MPLS_IN_GRE,
- NPC_LT_LE_TU_NSH_IN_GRE,
- NPC_LT_LE_TU_MPLS_IN_UDP,
- NPC_LT_LE_CUSTOM0 = 0xE,
- NPC_LT_LE_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lf_ltype {
- NPC_LT_LF_TU_ETHER = 1,
- NPC_LT_LF_TU_PPP,
- NPC_LT_LF_TU_MPLS_IN_VXLANGPE,
- NPC_LT_LF_TU_NSH_IN_VXLANGPE,
- NPC_LT_LF_TU_MPLS_IN_NSH,
- NPC_LT_LF_TU_3RD_NSH,
- NPC_LT_LF_CUSTOM0 = 0xE,
- NPC_LT_LF_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lg_ltype {
- NPC_LT_LG_TU_IP = 1,
- NPC_LT_LG_TU_IP6,
- NPC_LT_LG_TU_ARP,
- NPC_LT_LG_TU_ETHER_IN_NSH,
- NPC_LT_LG_CUSTOM0 = 0xE,
- NPC_LT_LG_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * effect flow tag calculation and thus RSS.
- */
-enum npc_kpu_lh_ltype {
- NPC_LT_LH_TU_TCP = 1,
- NPC_LT_LH_TU_UDP,
- NPC_LT_LH_TU_ICMP,
- NPC_LT_LH_TU_SCTP,
- NPC_LT_LH_TU_ICMP6,
- NPC_LT_LH_TU_IGMP = 8,
- NPC_LT_LH_TU_ESP,
- NPC_LT_LH_TU_AH,
- NPC_LT_LH_CUSTOM0 = 0xE,
- NPC_LT_LH_CUSTOM1 = 0xF,
-};
-
-/* Structures definitions */
-struct npc_kpu_profile_cam {
- uint8_t state;
- uint8_t state_mask;
- uint16_t dp0;
- uint16_t dp0_mask;
- uint16_t dp1;
- uint16_t dp1_mask;
- uint16_t dp2;
- uint16_t dp2_mask;
-};
-
-struct npc_kpu_profile_action {
- uint8_t errlev;
- uint8_t errcode;
- uint8_t dp0_offset;
- uint8_t dp1_offset;
- uint8_t dp2_offset;
- uint8_t bypass_count;
- uint8_t parse_done;
- uint8_t next_state;
- uint8_t ptr_advance;
- uint8_t cap_ena;
- uint8_t lid;
- uint8_t ltype;
- uint8_t flags;
- uint8_t offset;
- uint8_t mask;
- uint8_t right;
- uint8_t shift;
-};
-
-struct npc_kpu_profile {
- int cam_entries;
- int action_entries;
- struct npc_kpu_profile_cam *cam;
- struct npc_kpu_profile_action *action;
-};
-
-/* NPC KPU register formats */
-struct npc_kpu_cam {
- uint64_t dp0_data : 16;
- uint64_t dp1_data : 16;
- uint64_t dp2_data : 16;
- uint64_t state : 8;
- uint64_t rsvd_63_56 : 8;
-};
-
-struct npc_kpu_action0 {
- uint64_t var_len_shift : 3;
- uint64_t var_len_right : 1;
- uint64_t var_len_mask : 8;
- uint64_t var_len_offset : 8;
- uint64_t ptr_advance : 8;
- uint64_t capture_flags : 8;
- uint64_t capture_ltype : 4;
- uint64_t capture_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t next_state : 8;
- uint64_t parse_done : 1;
- uint64_t capture_ena : 1;
- uint64_t byp_count : 3;
- uint64_t rsvd_63_57 : 7;
-};
-
-struct npc_kpu_action1 {
- uint64_t dp0_offset : 8;
- uint64_t dp1_offset : 8;
- uint64_t dp2_offset : 8;
- uint64_t errcode : 8;
- uint64_t errlev : 4;
- uint64_t rsvd_63_36 : 28;
-};
-
-struct npc_kpu_pkind_cpi_def {
- uint64_t cpi_base : 10;
- uint64_t rsvd_11_10 : 2;
- uint64_t add_shift : 3;
- uint64_t rsvd_15 : 1;
- uint64_t add_mask : 8;
- uint64_t add_offset : 8;
- uint64_t flags_mask : 8;
- uint64_t flags_match : 8;
- uint64_t ltype_mask : 4;
- uint64_t ltype_match : 4;
- uint64_t lid : 3;
- uint64_t rsvd_62_59 : 4;
- uint64_t ena : 1;
-};
-
-struct nix_rx_action {
- uint64_t op :4;
- uint64_t pf_func :16;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t flow_key_alg :5;
- uint64_t rsvd_63_61 :3;
-};
-
-struct nix_tx_action {
- uint64_t op :4;
- uint64_t rsvd_11_4 :8;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t rsvd_63_48 :16;
-};
-
-/* NPC layer parse information structure */
-struct npc_layer_info_s {
- uint32_t lptr : 8;
- uint32_t flags : 8;
- uint32_t ltype : 4;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NPC layer mcam search key extract structure */
-struct npc_layer_kex_s {
- uint16_t flags : 8;
- uint16_t ltype : 4;
- uint16_t rsvd_15_12 : 4;
-};
-
-/* NPC mcam search key x1 structure */
-struct npc_mcam_key_x1_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 48;
- uint64_t rsvd_191_176 : 16;
-};
-
-/* NPC mcam search key x2 structure */
-struct npc_mcam_key_x2_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 32;
- uint64_t rsvd_319_288 : 32;
-};
-
-/* NPC mcam search key x4 structure */
-struct npc_mcam_key_x4_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 64; /* W4 */
- uint64_t kw4 : 64; /* W5 */
- uint64_t kw5 : 64; /* W6 */
- uint64_t kw6 : 64; /* W7 */
-};
-
-/* NPC parse key extract structure */
-struct npc_parse_kex_s {
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t la : 12;
- uint64_t lb : 12;
- uint64_t lc : 12;
- uint64_t ld : 12;
- uint64_t le : 12;
- uint64_t lf : 12;
- uint64_t lg : 12;
- uint64_t lh : 12;
- uint64_t rsvd_127_124 : 4;
-};
-
-/* NPC result structure */
-struct npc_result_s {
- uint64_t intf : 2;
- uint64_t pkind : 6;
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t eoh_ptr : 8;
- uint64_t rsvd_63_44 : 20;
- uint64_t action : 64; /* W1 */
- uint64_t vtag_action : 64; /* W2 */
- uint64_t la : 20;
- uint64_t lb : 20;
- uint64_t lc : 20;
- uint64_t rsvd_255_252 : 4;
- uint64_t ld : 20;
- uint64_t le : 20;
- uint64_t lf : 20;
- uint64_t rsvd_319_316 : 4;
- uint64_t lg : 20;
- uint64_t lh : 20;
- uint64_t rsvd_383_360 : 24;
-};
-
-#endif /* __OTX2_NPC_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_ree.h b/drivers/common/octeontx2/hw/otx2_ree.h
deleted file mode 100644
index b7481f125f..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ree.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_REE_HW_H__
-#define __OTX2_REE_HW_H__
-
-/* REE BAR0*/
-#define REE_AF_REEXM_MAX_MATCH (0x80c8)
-
-/* REE BAR02 */
-#define REE_LF_MISC_INT (0x300)
-#define REE_LF_DONE_INT (0x120)
-
-#define REE_AF_QUEX_GMCTL(a) (0x800 | (a) << 3)
-
-#define REE_AF_INT_VEC_RAS (0x0ull)
-#define REE_AF_INT_VEC_RVU (0x1ull)
-#define REE_AF_INT_VEC_QUE_DONE (0x2ull)
-#define REE_AF_INT_VEC_AQ (0x3ull)
-
-/* ENUMS */
-
-#define REE_LF_INT_VEC_QUE_DONE (0x0ull)
-#define REE_LF_INT_VEC_MISC (0x1ull)
-
-#endif /* __OTX2_REE_HW_H__*/
diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h
deleted file mode 100644
index b98dbcb1cd..0000000000
--- a/drivers/common/octeontx2/hw/otx2_rvu.h
+++ /dev/null
@@ -1,219 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RVU_HW_H__
-#define __OTX2_RVU_HW_H__
-
-/* Register offsets */
-
-#define RVU_AF_MSIXTR_BASE (0x10ull)
-#define RVU_AF_BLK_RST (0x30ull)
-#define RVU_AF_PF_BAR4_ADDR (0x40ull)
-#define RVU_AF_RAS (0x100ull)
-#define RVU_AF_RAS_W1S (0x108ull)
-#define RVU_AF_RAS_ENA_W1S (0x110ull)
-#define RVU_AF_RAS_ENA_W1C (0x118ull)
-#define RVU_AF_GEN_INT (0x120ull)
-#define RVU_AF_GEN_INT_W1S (0x128ull)
-#define RVU_AF_GEN_INT_ENA_W1S (0x130ull)
-#define RVU_AF_GEN_INT_ENA_W1C (0x138ull)
-#define RVU_AF_AFPFX_MBOXX(a, b) \
- (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3)
-#define RVU_AF_PFME_STATUS (0x2800ull)
-#define RVU_AF_PFTRPEND (0x2810ull)
-#define RVU_AF_PFTRPEND_W1S (0x2820ull)
-#define RVU_AF_PF_RST (0x2840ull)
-#define RVU_AF_HWVF_RST (0x2850ull)
-#define RVU_AF_PFAF_MBOX_INT (0x2880ull)
-#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull)
-#define RVU_AF_PFFLR_INT (0x28a0ull)
-#define RVU_AF_PFFLR_INT_W1S (0x28a8ull)
-#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull)
-#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull)
-#define RVU_AF_PFME_INT (0x28c0ull)
-#define RVU_AF_PFME_INT_W1S (0x28c8ull)
-#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull)
-#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull)
-#define RVU_PRIV_CONST (0x8000000ull)
-#define RVU_PRIV_GEN_CFG (0x8000010ull)
-#define RVU_PRIV_CLK_CFG (0x8000020ull)
-#define RVU_PRIV_ACTIVE_PC (0x8000030ull)
-#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_NIXX_CFG(a, b) \
- (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_CPTX_CFG(a, b) \
- (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3)
-#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \
- (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \
- (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-
-#define RVU_PF_VFX_PFVF_MBOXX(a, b) \
- (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3)
-#define RVU_PF_VF_BAR4_ADDR (0x10ull)
-#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3)
-#define RVU_PF_INT (0xc20ull)
-#define RVU_PF_INT_W1S (0xc28ull)
-#define RVU_PF_INT_ENA_W1S (0xc30ull)
-#define RVU_PF_INT_ENA_W1C (0xc38ull)
-#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3)
-#define RVU_VF_INT (0x20ull)
-#define RVU_VF_INT_W1S (0x28ull)
-#define RVU_VF_INT_ENA_W1S (0x30ull)
-#define RVU_VF_INT_ENA_W1C (0x38ull)
-#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-
-
-/* Enum offsets */
-
-#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull)
-#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull)
-#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \
- (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25))
-
-#define RVU_AF_INT_VEC_POISON (0x0ull)
-#define RVU_AF_INT_VEC_PFFLR (0x1ull)
-#define RVU_AF_INT_VEC_PFME (0x2ull)
-#define RVU_AF_INT_VEC_GEN (0x3ull)
-#define RVU_AF_INT_VEC_MBOX (0x4ull)
-
-#define RVU_BLOCK_TYPE_RVUM (0x0ull)
-#define RVU_BLOCK_TYPE_LMT (0x2ull)
-#define RVU_BLOCK_TYPE_NIX (0x3ull)
-#define RVU_BLOCK_TYPE_NPA (0x4ull)
-#define RVU_BLOCK_TYPE_NPC (0x5ull)
-#define RVU_BLOCK_TYPE_SSO (0x6ull)
-#define RVU_BLOCK_TYPE_SSOW (0x7ull)
-#define RVU_BLOCK_TYPE_TIM (0x8ull)
-#define RVU_BLOCK_TYPE_CPT (0x9ull)
-#define RVU_BLOCK_TYPE_NDC (0xaull)
-#define RVU_BLOCK_TYPE_DDF (0xbull)
-#define RVU_BLOCK_TYPE_ZIP (0xcull)
-#define RVU_BLOCK_TYPE_RAD (0xdull)
-#define RVU_BLOCK_TYPE_DFA (0xeull)
-#define RVU_BLOCK_TYPE_HNA (0xfull)
-#define RVU_BLOCK_TYPE_REE (0xeull)
-
-#define RVU_BLOCK_ADDR_RVUM (0x0ull)
-#define RVU_BLOCK_ADDR_LMT (0x1ull)
-#define RVU_BLOCK_ADDR_NPA (0x3ull)
-#define RVU_BLOCK_ADDR_NIX0 (0x4ull)
-#define RVU_BLOCK_ADDR_NIX1 (0x5ull)
-#define RVU_BLOCK_ADDR_NPC (0x6ull)
-#define RVU_BLOCK_ADDR_SSO (0x7ull)
-#define RVU_BLOCK_ADDR_SSOW (0x8ull)
-#define RVU_BLOCK_ADDR_TIM (0x9ull)
-#define RVU_BLOCK_ADDR_CPT0 (0xaull)
-#define RVU_BLOCK_ADDR_CPT1 (0xbull)
-#define RVU_BLOCK_ADDR_NDC0 (0xcull)
-#define RVU_BLOCK_ADDR_NDC1 (0xdull)
-#define RVU_BLOCK_ADDR_NDC2 (0xeull)
-#define RVU_BLOCK_ADDR_R_END (0x1full)
-#define RVU_BLOCK_ADDR_R_START (0x14ull)
-#define RVU_BLOCK_ADDR_REE0 (0x14ull)
-#define RVU_BLOCK_ADDR_REE1 (0x15ull)
-
-#define RVU_VF_INT_VEC_MBOX (0x0ull)
-
-#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull)
-#define RVU_PF_INT_VEC_VFFLR0 (0x0ull)
-#define RVU_PF_INT_VEC_VFFLR1 (0x1ull)
-#define RVU_PF_INT_VEC_VFME0 (0x2ull)
-#define RVU_PF_INT_VEC_VFME1 (0x3ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull)
-
-
-#define AF_BAR2_ALIASX_SIZE (0x100000ull)
-
-#define TIM_AF_BAR2_SEL (0x9000000ull)
-#define SSO_AF_BAR2_SEL (0x9000000ull)
-#define NIX_AF_BAR2_SEL (0x9000000ull)
-#define SSOW_AF_BAR2_SEL (0x9000000ull)
-#define NPA_AF_BAR2_SEL (0x9000000ull)
-#define CPT_AF_BAR2_SEL (0x9000000ull)
-#define RVU_AF_BAR2_SEL (0x9000000ull)
-#define REE_AF_BAR2_SEL (0x9000000ull)
-
-#define AF_BAR2_ALIASX(a, b) \
- (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b))
-#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define REE_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-
-/* Structures definitions */
-
-/* RVU admin function register address structure */
-struct rvu_af_addr_s {
- uint64_t addr : 28;
- uint64_t block : 5;
- uint64_t rsvd_63_33 : 31;
-};
-
-/* RVU function-unique address structure */
-struct rvu_func_addr_s {
- uint32_t addr : 12;
- uint32_t lf_slot : 8;
- uint32_t block : 5;
- uint32_t rsvd_31_25 : 7;
-};
-
-/* RVU msi-x vector structure */
-struct rvu_msix_vec_s {
- uint64_t addr : 64; /* W0 */
- uint64_t data : 32;
- uint64_t mask : 1;
- uint64_t pend : 1;
- uint64_t rsvd_127_98 : 30;
-};
-
-/* RVU pf function identification structure */
-struct rvu_pf_func_s {
- uint16_t func : 10;
- uint16_t pf : 6;
-};
-
-#endif /* __OTX2_RVU_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_sdp.h b/drivers/common/octeontx2/hw/otx2_sdp.h
deleted file mode 100644
index 1e690f8b32..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sdp.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SDP_HW_H_
-#define __OTX2_SDP_HW_H_
-
-/* SDP VF IOQs */
-#define SDP_MIN_RINGS_PER_VF (1)
-#define SDP_MAX_RINGS_PER_VF (8)
-
-/* SDP VF IQ configuration */
-#define SDP_VF_MAX_IQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_IQ_DESCRIPTORS (128)
-
-#define SDP_VF_DB_MIN (1)
-#define SDP_VF_DB_TIMEOUT (1)
-#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF)
-
-#define SDP_VF_64BYTE_INSTR (64)
-#define SDP_VF_32BYTE_INSTR (32)
-
-/* SDP VF OQ configuration */
-#define SDP_VF_MAX_OQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_OQ_DESCRIPTORS (128)
-#define SDP_VF_OQ_BUF_SIZE (2048)
-#define SDP_VF_OQ_REFIL_THRESHOLD (16)
-
-#define SDP_VF_OQ_INFOPTR_MODE (1)
-#define SDP_VF_OQ_BUFPTR_MODE (0)
-
-#define SDP_VF_OQ_INTR_PKT (1)
-#define SDP_VF_OQ_INTR_TIME (10)
-#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF
-
-/* Wait time in milliseconds for FLR */
-#define SDP_VF_PCI_FLR_WAIT (100)
-#define SDP_VF_BUSY_LOOP_COUNT (10000)
-
-#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF
-#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF
-
-/* SDP VF IOQs per rawdev */
-#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES
-#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES
-
-/* SDP VF Register definitions */
-#define SDP_VF_RING_OFFSET (0x1ull << 17)
-
-/* SDP VF IQ Registers */
-#define SDP_VF_R_IN_CONTROL_START (0x10000)
-#define SDP_VF_R_IN_ENABLE_START (0x10010)
-#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020)
-#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030)
-#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
-#define SDP_VF_R_IN_CNTS_START (0x10050)
-#define SDP_VF_R_IN_INT_LEVELS_START (0x10060)
-#define SDP_VF_R_IN_PKT_CNT_START (0x10080)
-#define SDP_VF_R_IN_BYTE_CNT_START (0x10090)
-
-#define SDP_VF_R_IN_CONTROL(ring) \
- (SDP_VF_R_IN_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_ENABLE(ring) \
- (SDP_VF_R_IN_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_BADDR(ring) \
- (SDP_VF_R_IN_INSTR_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_RSIZE(ring) \
- (SDP_VF_R_IN_INSTR_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_DBELL(ring) \
- (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_CNTS(ring) \
- (SDP_VF_R_IN_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INT_LEVELS(ring) \
- (SDP_VF_R_IN_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_PKT_CNT(ring) \
- (SDP_VF_R_IN_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_BYTE_CNT(ring) \
- (SDP_VF_R_IN_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF IQ Masks */
-#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF)
-#define SDP_VF_R_IN_CTL_RPVF_POS (48)
-
-#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28)
-#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */
-#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24)
-#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8)
-#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6)
-#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5)
-#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3)
-#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1)
-#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0)
-
-#define SDP_VF_R_IN_CTL_MASK \
- (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B)
-
-/* SDP VF OQ Registers */
-#define SDP_VF_R_OUT_CNTS_START (0x10100)
-#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110)
-#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120)
-#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130)
-#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140)
-#define SDP_VF_R_OUT_CONTROL_START (0x10150)
-#define SDP_VF_R_OUT_ENABLE_START (0x10160)
-#define SDP_VF_R_OUT_PKT_CNT_START (0x10180)
-#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190)
-
-#define SDP_VF_R_OUT_CONTROL(ring) \
- (SDP_VF_R_OUT_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_ENABLE(ring) \
- (SDP_VF_R_OUT_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_BADDR(ring) \
- (SDP_VF_R_OUT_SLIST_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \
- (SDP_VF_R_OUT_SLIST_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_DBELL(ring) \
- (SDP_VF_R_OUT_SLIST_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_CNTS(ring) \
- (SDP_VF_R_OUT_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_INT_LEVELS(ring) \
- (SDP_VF_R_OUT_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_PKT_CNT(ring) \
- (SDP_VF_R_OUT_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_BYTE_CNT(ring) \
- (SDP_VF_R_OUT_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF OQ Masks */
-#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40)
-#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34)
-#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33)
-#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32)
-#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30)
-#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29)
-#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28)
-#define SDP_VF_R_OUT_CTL_ES_P (1ull << 26)
-#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25)
-#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24)
-#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23)
-
-#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63)
-#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32)
-
-/* SDP Instruction Header */
-struct sdp_instr_ih {
- /* Data Len */
- uint64_t tlen:16;
-
- /* Reserved1 */
- uint64_t rsvd1:20;
-
- /* PKIND for SDP */
- uint64_t pkind:6;
-
- /* Front Data size */
- uint64_t fsz:6;
-
- /* No. of entries in gather list */
- uint64_t gsz:14;
-
- /* Gather indicator */
- uint64_t gather:1;
-
- /* Reserved2 */
- uint64_t rsvd2:1;
-} __rte_packed;
-
-#endif /* __OTX2_SDP_HW_H_ */
-
diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h
deleted file mode 100644
index 98a8130b16..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sso.h
+++ /dev/null
@@ -1,209 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSO_HW_H__
-#define __OTX2_SSO_HW_H__
-
-/* Register offsets */
-
-#define SSO_AF_CONST (0x1000ull)
-#define SSO_AF_CONST1 (0x1008ull)
-#define SSO_AF_WQ_INT_PC (0x1020ull)
-#define SSO_AF_NOS_CNT (0x1050ull)
-#define SSO_AF_AW_WE (0x1080ull)
-#define SSO_AF_WS_CFG (0x1088ull)
-#define SSO_AF_GWE_CFG (0x1098ull)
-#define SSO_AF_GWE_RANDOM (0x10b0ull)
-#define SSO_AF_LF_HWGRP_RST (0x10e0ull)
-#define SSO_AF_AW_CFG (0x10f0ull)
-#define SSO_AF_BLK_RST (0x10f8ull)
-#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull)
-#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull)
-#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull)
-#define SSO_AF_ERR0 (0x1220ull)
-#define SSO_AF_ERR0_W1S (0x1228ull)
-#define SSO_AF_ERR0_ENA_W1C (0x1230ull)
-#define SSO_AF_ERR0_ENA_W1S (0x1238ull)
-#define SSO_AF_ERR2 (0x1260ull)
-#define SSO_AF_ERR2_W1S (0x1268ull)
-#define SSO_AF_ERR2_ENA_W1C (0x1270ull)
-#define SSO_AF_ERR2_ENA_W1S (0x1278ull)
-#define SSO_AF_UNMAP_INFO (0x12f0ull)
-#define SSO_AF_UNMAP_INFO2 (0x1300ull)
-#define SSO_AF_UNMAP_INFO3 (0x1310ull)
-#define SSO_AF_RAS (0x1420ull)
-#define SSO_AF_RAS_W1S (0x1430ull)
-#define SSO_AF_RAS_ENA_W1C (0x1460ull)
-#define SSO_AF_RAS_ENA_W1S (0x1470ull)
-#define SSO_AF_AW_INP_CTL (0x2070ull)
-#define SSO_AF_AW_ADD (0x2080ull)
-#define SSO_AF_AW_READ_ARB (0x2090ull)
-#define SSO_AF_XAQ_REQ_PC (0x20b0ull)
-#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull)
-#define SSO_AF_TAQ_CNT (0x20c0ull)
-#define SSO_AF_TAQ_ADD (0x20e0ull)
-#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3)
-#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_AF_INT_CFG (0x3000ull)
-#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull)
-#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \
- (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \
- (uint64_t)(c) << 3)
-#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_WAEX_TAG(a, b) \
- (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define SSO_AF_TAQX_WAEX_WQP(a, b) \
- (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK
-#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_IAQ_RSVD_THR 0x2
-
-#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK
-#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull
-#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_TAQ_RSVD_THR 0x3
-
-#define SSO_HWGRP_PRI_AFF_MASK 0xFull
-#define SSO_HWGRP_PRI_AFF_SHIFT 8
-#define SSO_HWGRP_PRI_WGT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_SHIFT 16
-#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24
-
-#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0)
-#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1)
-#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2)
-#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3)
-#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4)
-
-#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8)
-#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9)
-#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull
-#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull
-
-/* Enum offsets */
-
-#define SSO_LF_INT_VEC_GRP (0x0ull)
-
-#define SSO_AF_INT_VEC_ERR0 (0x0ull)
-#define SSO_AF_INT_VEC_ERR2 (0x1ull)
-#define SSO_AF_INT_VEC_RAS (0x2ull)
-
-#define SSO_WA_IOBN (0x0ull)
-#define SSO_WA_NIXRX (0x1ull)
-#define SSO_WA_CPT (0x2ull)
-#define SSO_WA_ADDWQ (0x3ull)
-#define SSO_WA_DPI (0x4ull)
-#define SSO_WA_NIXTX (0x5ull)
-#define SSO_WA_TIM (0x6ull)
-#define SSO_WA_ZIP (0x7ull)
-
-#define SSO_TT_ORDERED (0x0ull)
-#define SSO_TT_ATOMIC (0x1ull)
-#define SSO_TT_UNTAGGED (0x2ull)
-#define SSO_TT_EMPTY (0x3ull)
-
-
-/* Structures definitions */
-
-#endif /* __OTX2_SSO_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h
deleted file mode 100644
index 8a44578036..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ssow.h
+++ /dev/null
@@ -1,56 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSOW_HW_H__
-#define __OTX2_SSOW_HW_H__
-
-/* Register offsets */
-
-#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull)
-#define SSOW_AF_LF_HWS_RST (0x30ull)
-#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3)
-#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3)
-#define SSOW_AF_SCRATCH_WS (0x100000ull)
-#define SSOW_AF_SCRATCH_GW (0x200000ull)
-#define SSOW_AF_SCRATCH_AW (0x300000ull)
-
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-
-/* Enum offsets */
-
-#define SSOW_LF_INT_VEC_IOP (0x0ull)
-
-
-#endif /* __OTX2_SSOW_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h
deleted file mode 100644
index 41442ad0a8..0000000000
--- a/drivers/common/octeontx2/hw/otx2_tim.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_HW_H__
-#define __OTX2_TIM_HW_H__
-
-/* TIM */
-#define TIM_AF_CONST (0x90)
-#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3)
-#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3)
-#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_LF_RST (0x20)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3)
-#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3)
-#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3)
-#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3)
-#define TIM_AF_FLAGS_REG (0x80)
-#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0)
-#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47)
-#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)
-#define TIM_AF_RINGX_CLT1_CLK_10NS (0)
-#define TIM_AF_RINGX_CLT1_CLK_GPIO (1)
-#define TIM_AF_RINGX_CLT1_CLK_GTI (2)
-#define TIM_AF_RINGX_CLT1_CLK_PTP (3)
-
-/* ENUMS */
-
-#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull)
-#define TIM_LF_INT_VEC_RAS_INT (0x1ull)
-
-#endif /* __OTX2_TIM_HW_H__ */
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
deleted file mode 100644
index 223ba5ef51..0000000000
--- a/drivers/common/octeontx2/meson.build
+++ /dev/null
@@ -1,24 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources= files(
- 'otx2_common.c',
- 'otx2_dev.c',
- 'otx2_irq.c',
- 'otx2_mbox.c',
- 'otx2_sec_idev.c',
-)
-
-deps = ['eal', 'pci', 'ethdev', 'kvargs']
-includes += include_directories(
- '../../common/octeontx2',
- '../../mempool/octeontx2',
- '../../bus/pci',
-)
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
deleted file mode 100644
index d23c50242e..0000000000
--- a/drivers/common/octeontx2/otx2_common.c
+++ /dev/null
@@ -1,216 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-/**
- * @internal
- * Set default NPA configuration.
- */
-void
-otx2_npa_set_defaults(struct otx2_idev_cfg *idev)
-{
- idev->npa_pf_func = 0;
- rte_atomic16_set(&idev->npa_refcnt, 0);
-}
-
-/**
- * @internal
- * Get intra device config structure.
- */
-struct otx2_idev_cfg *
-otx2_intra_dev_get_cfg(void)
-{
- const char name[] = "octeontx2_intra_device_conf";
- const struct rte_memzone *mz;
- struct otx2_idev_cfg *idev;
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- idev = mz->addr;
- idev->sso_pf_func = 0;
- idev->npa_lf = NULL;
- otx2_npa_set_defaults(idev);
- return idev;
- }
- return NULL;
-}
-
-/**
- * @internal
- * Get SSO PF_FUNC.
- */
-uint16_t
-otx2_sso_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t sso_pf_func;
-
- sso_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- sso_pf_func = idev->sso_pf_func;
-
- return sso_pf_func;
-}
-
-/**
- * @internal
- * Set SSO PF_FUNC.
- */
-void
-otx2_sso_pf_func_set(uint16_t sso_pf_func)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL) {
- idev->sso_pf_func = sso_pf_func;
- rte_smp_wmb();
- }
-}
-
-/**
- * @internal
- * Get NPA PF_FUNC.
- */
-uint16_t
-otx2_npa_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t npa_pf_func;
-
- npa_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- npa_pf_func = idev->npa_pf_func;
-
- return npa_pf_func;
-}
-
-/**
- * @internal
- * Get NPA lf object.
- */
-struct otx2_npa_lf *
-otx2_npa_lf_obj_get(void)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt))
- return idev->npa_lf;
-
- return NULL;
-}
-
-/**
- * @internal
- * Is NPA lf active for the given device?.
- */
-int
-otx2_npa_lf_active(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
-
- /* Check if npalf is actively used on this dev */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox)
- return 0;
-
- return rte_atomic16_read(&idev->npa_refcnt);
-}
-
-/*
- * @internal
- * Gets reference only to existing NPA LF object.
- */
-int otx2_npa_lf_obj_ref(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t cnt;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
-
- /* Check if ref not possible */
- if (idev == NULL)
- return -EINVAL;
-
-
- /* Get ref only if > 0 */
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- while (cnt != 0) {
- rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1);
- if (rc)
- break;
-
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- }
-
- return cnt ? 0 : -EINVAL;
-}
-
-static int
-parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint64_t val;
-
- val = strtoull(value, NULL, 16);
-
- *(uint64_t *)extra_args = val;
-
- return 0;
-}
-
-/*
- * @internal
- * Parse common device arguments
- */
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
-{
-
- struct otx2_idev_cfg *idev;
- uint64_t npa_lock_mask = 0;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
- &parse_npa_lock_mask, &npa_lock_mask);
-
- idev->npa_lock_mask = npa_lock_mask;
-}
-
-RTE_LOG_REGISTER(otx2_logtype_base, pmd.octeontx2.base, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_mbox, pmd.octeontx2.mbox, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npa, pmd.mempool.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_nix, pmd.net.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npc, pmd.net.octeontx2.flow, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tm, pmd.net.octeontx2.tm, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_sso, pmd.event.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tim, pmd.event.octeontx2.timer, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_dpi, pmd.raw.octeontx2.dpi, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ep, pmd.raw.octeontx2.ep, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ree, pmd.regex.octeontx2, NOTICE);
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
deleted file mode 100644
index cd52e098e6..0000000000
--- a/drivers/common/octeontx2/otx2_common.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_COMMON_H_
-#define _OTX2_COMMON_H_
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_kvargs.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_io.h>
-
-#include "hw/otx2_rvu.h"
-#include "hw/otx2_nix.h"
-#include "hw/otx2_npc.h"
-#include "hw/otx2_npa.h"
-#include "hw/otx2_sdp.h"
-#include "hw/otx2_sso.h"
-#include "hw/otx2_ssow.h"
-#include "hw/otx2_tim.h"
-#include "hw/otx2_ree.h"
-
-/* Alignment */
-#define OTX2_ALIGN 128
-
-/* Bits manipulation */
-#ifndef BIT_ULL
-#define BIT_ULL(nr) (1ULL << (nr))
-#endif
-#ifndef BIT
-#define BIT(nr) (1UL << (nr))
-#endif
-
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-#ifndef BITS_PER_LONG_LONG
-#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
-#endif
-
-#ifndef GENMASK
-#define GENMASK(h, l) \
- (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif
-#ifndef GENMASK_ULL
-#define GENMASK_ULL(h, l) \
- (((~0ULL) - (1ULL << (l)) + 1) & \
- (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
-#endif
-
-#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
-
-/* Intra device related functions */
-struct otx2_npa_lf;
-struct otx2_idev_cfg {
- uint16_t sso_pf_func;
- uint16_t npa_pf_func;
- struct otx2_npa_lf *npa_lf;
- RTE_STD_C11
- union {
- rte_atomic16_t npa_refcnt;
- uint16_t npa_refcnt_u16;
- };
- uint64_t npa_lock_mask;
-};
-
-__rte_internal
-struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
-__rte_internal
-void otx2_sso_pf_func_set(uint16_t sso_pf_func);
-__rte_internal
-uint16_t otx2_sso_pf_func_get(void);
-__rte_internal
-uint16_t otx2_npa_pf_func_get(void);
-__rte_internal
-struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
-__rte_internal
-void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
-__rte_internal
-int otx2_npa_lf_active(void *dev);
-__rte_internal
-int otx2_npa_lf_obj_ref(void);
-__rte_internal
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
-
-/* Log */
-extern int otx2_logtype_base;
-extern int otx2_logtype_mbox;
-extern int otx2_logtype_npa;
-extern int otx2_logtype_nix;
-extern int otx2_logtype_sso;
-extern int otx2_logtype_npc;
-extern int otx2_logtype_tm;
-extern int otx2_logtype_tim;
-extern int otx2_logtype_dpi;
-extern int otx2_logtype_ep;
-extern int otx2_logtype_ree;
-
-#define otx2_err(fmt, args...) \
- RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", \
- __func__, __LINE__, ## args)
-
-#define otx2_info(fmt, args...) \
- RTE_LOG(INFO, PMD, fmt"\n", ## args)
-
-#define otx2_dbg(subsystem, fmt, args...) \
- rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \
- "[%s] %s():%u " fmt "\n", \
- #subsystem, __func__, __LINE__, ##args)
-
-#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__)
-#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__)
-#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__)
-#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__)
-#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__)
-#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__)
-#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__)
-#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__)
-#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__)
-#define otx2_sdp_dbg(fmt, ...) otx2_dbg(ep, fmt, ##__VA_ARGS__)
-#define otx2_ree_dbg(fmt, ...) otx2_dbg(ree, fmt, ##__VA_ARGS__)
-
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063
-#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064
-#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE
-#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8
-#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
-/* OCTEON TX2 98xx EP mode */
-#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
-#define PCI_DEVID_OCTEONTX2_EP_RAW_VF 0xB204 /* OCTEON TX2 EP mode */
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7
-#define PCI_DEVID_OCTEONTX2_RVU_REE_PF 0xA0f4
-#define PCI_DEVID_OCTEONTX2_RVU_REE_VF 0xA0f5
-
-/*
- * REVID for RVU PCIe devices.
- * Bits 0..1: minor pass
- * Bits 3..2: major pass
- * Bits 7..4: midr id, 0:96, 1:95, 2:loki, f:unknown
- */
-
-#define RVU_PCI_REV_MIDR_ID(rev_id) (rev_id >> 4)
-#define RVU_PCI_REV_MAJOR(rev_id) ((rev_id >> 2) & 0x3)
-#define RVU_PCI_REV_MINOR(rev_id) (rev_id & 0x3)
-
-#define RVU_PCI_CN96XX_MIDR_ID 0x0
-#define RVU_PCI_CNF95XX_MIDR_ID 0x1
-
-/* PCI Config offsets */
-#define RVU_PCI_REVISION_ID 0x08
-
-/* IO Access */
-#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
-#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-
-#if defined(RTE_ARCH_ARM64)
-#include "otx2_io_arm64.h"
-#else
-#include "otx2_io_generic.h"
-#endif
-
-/* Fastpath lookup */
-#define OTX2_NIX_FASTPATH_LOOKUP_MEM "otx2_nix_fastpath_lookup_mem"
-#define OTX2_NIX_SA_TBL_START (4096*4 + 69632*2)
-
-#endif /* _OTX2_COMMON_H_ */
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
deleted file mode 100644
index 08dca87848..0000000000
--- a/drivers/common/octeontx2/otx2_dev.c
+++ /dev/null
@@ -1,1074 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <fcntl.h>
-#include <inttypes.h>
-#include <sys/mman.h>
-#include <unistd.h>
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_memcpy.h>
-#include <rte_eal_paging.h>
-
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */
-#define RVU_MAX_INT_RETRY 3
-
-/* PF/VF message handling timer */
-#define VF_PF_MBOX_TIMER_MS (20 * 1000)
-
-static void *
-mbox_mem_map(off_t off, size_t size)
-{
- void *va = MAP_FAILED;
- int mem_fd;
-
- if (size <= 0)
- goto error;
-
- mem_fd = open("/dev/mem", O_RDWR);
- if (mem_fd < 0)
- goto error;
-
- va = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
- RTE_MAP_SHARED, mem_fd, off);
- close(mem_fd);
-
- if (va == NULL)
- otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd",
- size, mem_fd, (intmax_t)off);
-error:
- return va;
-}
-
-static void
-mbox_mem_unmap(void *va, size_t size)
-{
- if (va)
- rte_mem_unmap(va, size);
-}
-
-static int
-pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp)
-{
- uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_msghdr *msghdr;
- uint64_t off;
- int rc = 0;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout += sleep;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT);
- rc = -EIO;
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(int_status, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- if (rc == 0) {
- /* Get message */
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off);
- if (rsp)
- *rsp = msghdr;
- rc = msghdr->rc;
- }
-
- return rc;
-}
-
-static int
-af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg)
-{
- uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- struct mbox_msghdr *rsp;
- uint64_t offset;
- size_t size;
- int i;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout++;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Routed messages %d timeout: %dms",
- num_msg, MBOX_RSP_TIMEOUT);
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- rte_spinlock_lock(&mdev->mbox_lock);
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs != num_msg)
- otx2_err("Routed messages: %d received: %d", num_msg,
- req_hdr->num_msgs);
-
- /* Get messages from mbox */
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* Reserve PF/VF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* Copy message from AF<->PF mbox to PF<->VF mbox */
- otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
-
- /* Set status and sender pf_func data */
- rsp->rc = msg->rc;
- rsp->pcifunc = msg->pcifunc;
-
- /* Whenever a PF comes up, AF sends the link status to it but
- * when VF comes up no such event is sent to respective VF.
- * Using MBOX_MSG_NIX_LF_START_RX response from AF for the
- * purpose and send the link status of PF to VF.
- */
- if (msg->id == MBOX_MSG_NIX_LF_START_RX) {
- /* Send link status to VF */
- struct cgx_link_user_info linfo;
- struct mbox_msghdr *vf_msg;
- size_t sz;
-
- /* Get the link status */
- if (dev->ops && dev->ops->link_status_get)
- dev->ops->link_status_get(dev, &linfo);
-
- sz = RTE_ALIGN(otx2_mbox_id2size(
- MBOX_MSG_CGX_LINK_EVENT), MBOX_MSG_ALIGN);
- /* Prepare the message to be sent */
- vf_msg = otx2_mbox_alloc_msg(&dev->mbox_vfpf_up, vf,
- sz);
- otx2_mbox_req_init(MBOX_MSG_CGX_LINK_EVENT, vf_msg);
- memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr),
- &linfo, sizeof(struct cgx_link_user_info));
-
- vf_msg->rc = msg->rc;
- vf_msg->pcifunc = msg->pcifunc;
- /* Send to VF */
- otx2_mbox_msg_send(&dev->mbox_vfpf_up, vf);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return req_hdr->num_msgs;
-}
-
-static int
-vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- int offset, routed = 0; struct otx2_mbox *mbox = &dev->mbox_vfpf;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- size_t size;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (!req_hdr->num_msgs)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
-
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- if (msg->id == MBOX_MSG_READY) {
- struct ready_msg_rsp *rsp;
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
-
- /* Handle READY message in PF */
- dev->active_vfs[vf / max_bits] |=
- BIT_ULL(vf % max_bits);
- rsp = (struct ready_msg_rsp *)
- otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp));
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* PF/VF function ID */
- rsp->hdr.pcifunc = msg->pcifunc;
- rsp->hdr.rc = 0;
- } else {
- struct mbox_msghdr *af_req;
- /* Reserve AF/PF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size);
- otx2_mbox_req_init(msg->id, af_req);
-
- /* Copy message from VF<->PF mbox to PF<->AF mbox */
- otx2_mbox_memcpy((uint8_t *)af_req +
- sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
- af_req->pcifunc = msg->pcifunc;
- routed++;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- if (routed > 0) {
- otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF",
- dev->pf, routed, vf);
- af_pf_wait_msg(dev, vf, routed);
- otx2_mbox_reset(dev->mbox, 0);
- }
-
- /* Send mbox responses to VF */
- if (mdev->num_msgs) {
- otx2_base_dbg("pf:%d reply %d messages to vf:%d",
- dev->pf, mdev->num_msgs, vf);
- otx2_mbox_msg_send(mbox, vf);
- }
-
- return i;
-}
-
-static int
-vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = &dev->mbox_vfpf_up;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- switch (msg->id) {
- case MBOX_MSG_CGX_LINK_EVENT:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- case MBOX_MSG_CGX_PTP_RX_INFO:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- default:
- otx2_err("Not handled UP msg 0x%x (%s) func:0x%x",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- otx2_mbox_reset(mbox, vf);
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-
- return i;
-}
-
-static void
-otx2_vf_pf_mbox_handle_msg(void *param)
-{
- uint16_t vf, max_vf, max_bits;
- struct otx2_dev *dev = param;
-
- max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t);
- max_vf = max_bits * MAX_VFPF_DWORD_BITS;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) {
- otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)",
- vf, dev->pf, dev->vf);
- vf_pf_process_msgs(dev, vf);
- /* UP messages */
- vf_pf_process_up_msgs(dev, vf);
- dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits));
- }
- }
- dev->timer_set = 0;
-}
-
-static void
-otx2_vf_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- bool alarm_set = false;
- uint64_t intr;
- int vfpf;
-
- for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
- intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- if (!intr)
- continue;
-
- otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
- vfpf, intr, dev->pf, dev->vf);
-
- /* Save and clear intr bits */
- dev->intr.bits[vfpf] |= intr;
- otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- alarm_set = true;
- }
-
- if (!dev->timer_set && alarm_set) {
- dev->timer_set = 1;
- /* Start timer to handle messages */
- rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS,
- otx2_vf_pf_mbox_handle_msg, dev);
- }
-}
-
-static void
-otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
-
- switch (msg->id) {
- /* Add message id's that are handled here */
- case MBOX_MSG_READY:
- /* Get our identity */
- dev->pf_func = msg->pcifunc;
- break;
-
- default:
- if (msg->rc)
- otx2_err("Message (%s) response has err=%d",
- otx2_mbox_id2name(msg->id), msg->rc);
- break;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- otx2_mbox_reset(mbox, 0);
- /* Update acked if someone is waiting a message */
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-}
-
-/* Copies the message received from AF and sends it to VF */
-static void
-pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg)
-{
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t);
- struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up;
- struct msg_req *msg = rec_msg;
- struct mbox_msghdr *vf_msg;
- uint16_t vf;
- size_t size;
-
- size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
- /* Send UP message to all VF's */
- for (vf = 0; vf < vf_mbox->ndevs; vf++) {
- /* VF active */
- if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf))))
- continue;
-
- otx2_base_dbg("(%s) size: %zx to VF: %d",
- otx2_mbox_id2name(msg->hdr.id), size, vf);
-
- /* Reserve PF/VF mbox message */
- vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size);
- if (!vf_msg) {
- otx2_err("Failed to alloc VF%d UP message", vf);
- continue;
- }
- otx2_mbox_req_init(msg->hdr.id, vf_msg);
-
- /*
- * Copy message from AF<->PF UP mbox
- * to PF<->VF UP mbox
- */
- otx2_mbox_memcpy((uint8_t *)vf_msg +
- sizeof(struct mbox_msghdr), (uint8_t *)msg
- + sizeof(struct mbox_msghdr), size -
- sizeof(struct mbox_msghdr));
-
- vf_msg->rc = msg->hdr.rc;
- /* Set PF to be a sender */
- vf_msg->pcifunc = dev->pf_func;
-
- /* Send to VF */
- otx2_mbox_msg_send(vf_mbox, vf);
- }
-}
-
-static int
-otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev,
- struct cgx_link_info_msg *msg,
- struct msg_rsp *rsp)
-{
- struct cgx_link_user_info *linfo = &msg->link_info;
-
- otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func),
- linfo->link_up ? "UP" : "DOWN", msg->hdr.id,
- otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets link notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets link up notification */
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev,
- struct cgx_ptp_rx_info_msg *msg,
- struct msg_rsp *rsp)
-{
- otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func),
- otx2_get_vf(dev->pf_func),
- msg->ptp_en ? "ENABLED" : "DISABLED",
- msg->hdr.id, otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets PTP notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets PTP notification */
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req)
-{
- /* Check if valid, if not reply with a invalid msg */
- if (req->sig != OTX2_MBOX_REQ_SIG)
- return -EIO;
-
- switch (req->id) {
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
- case _id: { \
- struct _rsp_type *rsp; \
- int err; \
- \
- rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \
- &dev->mbox_up, 0, \
- sizeof(struct _rsp_type)); \
- if (!rsp) \
- return -ENOMEM; \
- \
- rsp->hdr.id = _id; \
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \
- rsp->hdr.pcifunc = dev->pf_func; \
- rsp->hdr.rc = 0; \
- \
- err = otx2_mbox_up_handler_ ## _fn_name( \
- dev, (struct _req_type *)req, rsp); \
- return err; \
- }
-MBOX_UP_CGX_MESSAGES
-#undef M
-
- default :
- otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
- }
-
- return -ENODEV;
-}
-
-static void
-otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int i, err, offset;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- err = mbox_process_msgs_up(dev, msg);
- if (err)
- otx2_err("Error %d handling 0x%x (%s)",
- err, msg->id, otx2_mbox_id2name(msg->id));
- offset = mbox->rx_start + msg->next_msgoff;
- }
- /* Send mbox responses */
- if (mdev->num_msgs) {
- otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs);
- otx2_mbox_msg_send(mbox, 0);
- }
-}
-
-static void
-otx2_pf_vf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_VF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_VF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static void
-otx2_af_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_PF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_PF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static int
-mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i, rc;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- /* MBOX interrupt for VF(0...63) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- if (rc) {
- otx2_err("Fail to register PF(VF0-63) mbox irq");
- return rc;
- }
- /* MBOX interrupt for VF(64...128) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- if (rc) {
- otx2_err("Fail to register PF(VF64-128) mbox irq");
- return rc;
- }
- /* MBOX interrupt AF <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq,
- dev, RVU_PF_INT_VEC_AFPF_MBOX);
- if (rc) {
- otx2_err("Fail to register AF<->PF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int rc;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* MBOX interrupt PF <-> VF */
- rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq,
- dev, RVU_VF_INT_VEC_MBOX);
- if (rc) {
- otx2_err("Fail to register PF<->VF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return mbox_register_vf_irq(pci_dev, dev);
- else
- return mbox_register_pf_irq(pci_dev, dev);
-}
-
-static void
-mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev);
-
- /* Unregister the interrupt handler for each vectors */
- /* MBOX interrupt for VF(0...63) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- /* MBOX interrupt for VF(64...128) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- /* MBOX interrupt AF <-> PF */
- otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_AFPF_MBOX);
-
-}
-
-static void
-mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* Unregister the interrupt handler */
- otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev,
- RVU_VF_INT_VEC_MBOX);
-}
-
-static void
-mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- mbox_unregister_vf_irq(pci_dev, dev);
- else
- mbox_unregister_pf_irq(pci_dev, dev);
-}
-
-static int
-vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msg_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_vf_flr(mbox);
- /* Overwrite pcifunc to indicate VF */
- req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- /* Sync message in interrupt context */
- rc = pf_af_sync_msg(dev, NULL);
- if (rc)
- otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc);
-
- return rc;
-}
-
-static void
-otx2_pf_vf_flr_irq(void *param)
-{
- struct otx2_dev *dev = (struct otx2_dev *)param;
- uint16_t max_vf = 64, vf;
- uintptr_t bar2;
- uint64_t intr;
- int i;
-
- max_vf = (dev->maxvf > 0) ? dev->maxvf : 64;
- bar2 = dev->bar2;
-
- otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf);
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i));
- if (!intr)
- continue;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (!(intr & (1ULL << vf)))
- continue;
-
- otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d",
- i, intr, (64 * i + vf));
- /* Clear interrupt */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i));
- /* Disable the interrupt */
- otx2_write64(BIT_ULL(vf),
- bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
- /* Inform AF about VF reset */
- vf_flr_send_msg(dev, vf);
-
- /* Signal FLR finish */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i));
- /* Enable interrupt */
- otx2_write64(~0ull,
- bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- }
-}
-
-static int
-vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
-}
-
-static int
-vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int i, rc;
-
- otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc);
-
- /* Enable HW interrupt */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- return 0;
-}
-
-/**
- * @internal
- * Get number of active VFs for the given PF device.
- */
-int
-otx2_dev_active_vfs(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- int i, count = 0;
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- count += __builtin_popcount(dev->active_vfs[i]);
-
- return count;
-}
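Reviewer note: the loop above counts active VFs by popcounting the two 64-bit bitmap words. A minimal standalone sketch of the same logic follows; it deliberately uses `__builtin_popcountll` rather than `__builtin_popcount`, since the bitmap words are `uint64_t` and the plain variant takes `unsigned int`:

```c
#include <stdint.h>

#define MAX_VFPF_DWORD_BITS 2

/* Sketch of otx2_dev_active_vfs(): count set bits across the two
 * 64-bit VF bitmap words (up to 128 VFs per PF). */
static int
count_active_vfs(const uint64_t active_vfs[MAX_VFPF_DWORD_BITS])
{
	int i, count = 0;

	for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
		count += __builtin_popcountll(active_vfs[i]);

	return count;
}
```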
-
-static void
-otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- switch (pci_dev->id.device_id) {
- case PCI_DEVID_OCTEONTX2_RVU_PF:
- break;
- case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF:
- case PCI_DEVID_OCTEONTX2_RVU_NPA_VF:
- case PCI_DEVID_OCTEONTX2_RVU_CPT_VF:
- case PCI_DEVID_OCTEONTX2_RVU_AF_VF:
- case PCI_DEVID_OCTEONTX2_RVU_VF:
- case PCI_DEVID_OCTEONTX2_RVU_SDP_VF:
- dev->hwcap |= OTX2_HWCAP_F_VF;
- break;
- }
-}
-
-/**
- * @internal
- * Initialize the otx2 device
- */
-int
-otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- int up_direction = MBOX_DIR_PFAF_UP;
- int rc, direction = MBOX_DIR_PFAF;
- uint64_t intr_offset = RVU_PF_INT;
- struct otx2_dev *dev = otx2_dev;
- uintptr_t bar2, bar4;
- uint64_t bar4_addr;
- void *hwbase;
-
- bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
- bar4 = (uintptr_t)pci_dev->mem_resource[4].addr;
-
- if (bar2 == 0 || bar4 == 0) {
- otx2_err("Failed to get pci bars");
- rc = -ENODEV;
- goto error;
- }
-
- dev->node = pci_dev->device.numa_node;
- dev->maxvf = pci_dev->max_vfs;
- dev->bar2 = bar2;
- dev->bar4 = bar4;
-
- otx2_update_vf_hwcap(pci_dev, dev);
-
- if (otx2_dev_is_vf(dev)) {
- direction = MBOX_DIR_VFPF;
- up_direction = MBOX_DIR_VFPF_UP;
- intr_offset = RVU_VF_INT;
- }
-
- /* Initialize the local mbox */
- rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1,
- intr_offset);
- if (rc)
- goto error;
- dev->mbox = &dev->mbox_local;
-
- rc = otx2_mbox_init(&dev->mbox_up, bar4, bar2, up_direction, 1,
- intr_offset);
- if (rc)
- goto error;
-
- /* Register mbox interrupts */
- rc = mbox_register_irq(pci_dev, dev);
- if (rc)
- goto mbox_fini;
-
- /* Check the readiness of PF/VF */
- rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func);
- if (rc)
- goto mbox_unregister;
-
- dev->pf = otx2_get_pf(dev->pf_func);
- dev->vf = otx2_get_vf(dev->pf_func);
- memset(&dev->active_vfs, 0, sizeof(dev->active_vfs));
-
- /* Found VF devices in a PF device */
- if (pci_dev->max_vfs > 0) {
-
- /* Remap mbox area for all vf's */
- bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR);
- if (bar4_addr == 0) {
- rc = -ENODEV;
- goto mbox_fini;
- }
-
- hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs);
- if (hwbase == MAP_FAILED) {
- rc = -ENOMEM;
- goto mbox_fini;
- }
- /* Init mbox object */
- rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto iounmap;
-
- /* PF -> VF UP messages */
- rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto mbox_fini;
- }
-
- /* Register VF-FLR irq handlers */
- if (otx2_dev_is_pf(dev)) {
- rc = vf_flr_register_irqs(pci_dev, dev);
- if (rc)
- goto iounmap;
- }
- dev->mbox_active = 1;
- return rc;
-
-iounmap:
- mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs);
-mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
-mbox_fini:
- otx2_mbox_fini(dev->mbox);
- otx2_mbox_fini(&dev->mbox_up);
-error:
- return rc;
-}
-
-/**
- * @internal
- * Finalize the otx2 device
- */
-void
-otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_mbox *mbox;
-
- /* Clear references to this pci dev */
- idev = otx2_intra_dev_get_cfg();
- if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev)
- idev->npa_lf = NULL;
-
- mbox_unregister_irq(pci_dev, dev);
-
- if (otx2_dev_is_pf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
- /* Release PF - VF */
- mbox = &dev->mbox_vfpf;
- if (mbox->hwbase && mbox->dev)
- mbox_mem_unmap((void *)mbox->hwbase,
- MBOX_SIZE * pci_dev->max_vfs);
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_vfpf_up;
- otx2_mbox_fini(mbox);
-
- /* Release PF - AF */
- mbox = dev->mbox;
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_up;
- otx2_mbox_fini(mbox);
- dev->mbox_active = 0;
-
- /* Disable MSIX vectors */
- otx2_disable_irqs(intr_handle);
-}
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
deleted file mode 100644
index d5b2b0d9af..0000000000
--- a/drivers/common/octeontx2/otx2_dev.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_DEV_H
-#define _OTX2_DEV_H
-
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mbox.h"
-#include "otx2_mempool.h"
-
-/* Common HWCAP flags. Use from LSB bits */
-#define OTX2_HWCAP_F_VF BIT_ULL(8) /* VF device */
-#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF)
-#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF))
-#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \
- (dev->tx_chan_base < 0x700))
-#define otx2_dev_revid(dev) (dev->hwcap & 0xFF)
-#define otx2_dev_is_sdp(dev) (dev->sdp_link)
-
-#define otx2_dev_is_vf_or_sdp(dev) \
- (otx2_dev_is_vf(dev) || otx2_dev_is_sdp(dev))
-
-#define otx2_dev_is_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_95xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-#define otx2_dev_is_95xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-
-#define otx2_dev_is_96xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_96xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_Cx(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_C0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_98xx(dev) \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x3)
-
-struct otx2_dev;
-
-/* Link status update callback */
-typedef void (*otx2_link_status_update_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-/* PTP info callback */
-typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en);
-/* Link status get callback */
-typedef void (*otx2_link_status_get_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-
-struct otx2_dev_ops {
- otx2_link_status_update_t link_status_update;
- otx2_ptp_info_t ptp_info_update;
- otx2_link_status_get_t link_status_get;
-};
-
-#define OTX2_DEV \
- int node __rte_cache_aligned; \
- uint16_t pf; \
- int16_t vf; \
- uint16_t pf_func; \
- uint8_t mbox_active; \
- bool drv_inited; \
- uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \
- uintptr_t bar2; \
- uintptr_t bar4; \
- struct otx2_mbox mbox_local; \
- struct otx2_mbox mbox_up; \
- struct otx2_mbox mbox_vfpf; \
- struct otx2_mbox mbox_vfpf_up; \
- otx2_intr_t intr; \
- int timer_set; /* ~0 : no alarm handling */ \
- uint64_t hwcap; \
- struct otx2_npa_lf npalf; \
- struct otx2_mbox *mbox; \
- uint16_t maxvf; \
- const struct otx2_dev_ops *ops
-
-struct otx2_dev {
- OTX2_DEV;
-};
-
-__rte_internal
-int otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-
-/* Common dev init and fini routines */
-
-static __rte_always_inline int
-otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- uint8_t rev_id;
- int rc;
-
- rc = rte_pci_read_config(pci_dev, &rev_id,
- 1, RVU_PCI_REVISION_ID);
- if (rc != 1) {
- otx2_err("Failed to read pci revision id, rc=%d", rc);
- return rc;
- }
-
- dev->hwcap = rev_id;
- return otx2_dev_priv_init(pci_dev, otx2_dev);
-}
-
-__rte_internal
-void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_dev_active_vfs(void *otx2_dev);
-
-#define RVU_PFVF_PF_SHIFT 10
-#define RVU_PFVF_PF_MASK 0x3F
-#define RVU_PFVF_FUNC_SHIFT 0
-#define RVU_PFVF_FUNC_MASK 0x3FF
-
-static inline int
-otx2_get_vf(uint16_t pf_func)
-{
- return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
-}
-
-static inline int
-otx2_get_pf(uint16_t pf_func)
-{
- return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
-}
-
-static inline int
-otx2_pfvf_func(int pf, int vf)
-{
- return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
-}
-
-static inline int
-otx2_is_afvf(uint16_t pf_func)
-{
- return !(pf_func & ~RVU_PFVF_FUNC_MASK);
-}
-
-#endif /* _OTX2_DEV_H */
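Reviewer note: the `pf_func` helpers removed above pack the PF number into bits [15:10] and the function slot into bits [9:0], where slot 0 is the PF itself and VF n is stored as n + 1. A self-contained sketch of the same encode/decode round trip, using the mask/shift constants from the deleted header:

```c
#include <stdint.h>

#define RVU_PFVF_PF_SHIFT	10
#define RVU_PFVF_PF_MASK	0x3F
#define RVU_PFVF_FUNC_SHIFT	0
#define RVU_PFVF_FUNC_MASK	0x3FF

/* VF n is encoded as function slot n + 1; slot 0 means the PF. */
static inline int
pfvf_func(int pf, int vf)
{
	return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
}

static inline int
get_pf(uint16_t pf_func)
{
	return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
}

static inline int
get_vf(uint16_t pf_func)
{
	return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
}
```

Note that `get_vf()` on a PF-only `pf_func` (function slot 0) yields -1, which is why `dev->vf` is a signed `int16_t` in the `OTX2_DEV` block above.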
diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h
deleted file mode 100644
index 34268e3af3..0000000000
--- a/drivers/common/octeontx2/otx2_io_arm64.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_ARM64_H_
-#define _OTX2_IO_ARM64_H_
-
-#define otx2_load_pair(val0, val1, addr) ({ \
- asm volatile( \
- "ldp %x[x0], %x[x1], [%x[p1]]" \
- :[x0]"=r"(val0), [x1]"=r"(val1) \
- :[p1]"r"(addr) \
- ); })
-
-#define otx2_store_pair(val0, val1, addr) ({ \
- asm volatile( \
- "stp %x[x0], %x[x1], [%x[p1],#0]!" \
- ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
- ); })
-
-#define otx2_prefetch_store_keep(ptr) ({\
- asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); })
-
-#if defined(__ARM_FEATURE_SVE)
-#define __LSE_PREAMBLE " .cpu generic+lse+sve\n"
-#else
-#define __LSE_PREAMBLE " .cpu generic+lse\n"
-#endif
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with no ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadd %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadda %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeor xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result): [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit_release(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeorl xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result) : [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- dst128[0] = src128[0];
- dst128[1] = src128[1];
- /* lmtext receives following value:
- * 1: NIX_SUBDC_EXT needed i.e. tx vlan case
- * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. tstamp case
- */
- if (lmtext) {
- dst128[2] = src128[2];
- if (lmtext > 1)
- dst128[3] = src128[3];
- }
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- uint8_t i;
-
- for (i = 0; i < segdw; i++)
- dst128[i] = src128[i];
-}
-
-#undef __LSE_PREAMBLE
-#endif /* _OTX2_IO_ARM64_H_ */
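Reviewer note: the LDADD/LDADDA inline asm pair removed above are fetch-and-add operations that return the value the memory held before the add, differing only in ordering (no ordering vs. acquire). A portable C11 sketch of the same semantics, for readers not fluent in LSE mnemonics:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Relaxed fetch-and-add: corresponds to LDADD (no ordering). */
static inline uint64_t
atomic64_add_nosync(int64_t incr, _Atomic int64_t *ptr)
{
	return (uint64_t)atomic_fetch_add_explicit(ptr, incr,
						   memory_order_relaxed);
}

/* Acquire fetch-and-add: corresponds to LDADDA (load-acquire). */
static inline uint64_t
atomic64_add_sync(int64_t incr, _Atomic int64_t *ptr)
{
	return (uint64_t)atomic_fetch_add_explicit(ptr, incr,
						   memory_order_acquire);
}
```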
diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h
deleted file mode 100644
index 3436a6c3d5..0000000000
--- a/drivers/common/octeontx2/otx2_io_generic.h
+++ /dev/null
@@ -1,75 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_GENERIC_H_
-#define _OTX2_IO_GENERIC_H_
-
-#include <string.h>
-
-#define otx2_load_pair(val0, val1, addr) \
-do { \
- val0 = rte_read64_relaxed((void *)(addr)); \
- val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \
-} while (0)
-
-#define otx2_store_pair(val0, val1, addr) \
-do { \
- rte_write64_relaxed(val0, (void *)(addr)); \
- rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \
-} while (0)
-
-#define otx2_prefetch_store_keep(ptr) do {} while (0)
-
-static inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit_release(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- /* Copy four words if lmtext = 0
- * six words if lmtext = 1
- * eight words if lmtext =2
- */
- memcpy(out, in, (4 + (2 * lmtext)) * sizeof(uint64_t));
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- RTE_SET_USED(out);
- RTE_SET_USED(in);
- RTE_SET_USED(segdw);
-}
-#endif /* _OTX2_IO_GENERIC_H_ */
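Reviewer note: the generic `otx2_lmt_mov()` above copies 4, 6 or 8 64-bit words depending on `lmtext` (0: plain descriptor, 1: NIX_SUBDC_EXT for the tx vlan case, 2: EXT + NIX_SUBDC_MEM for timestamping), which matches the 2/3/4 128-bit stores of the arm64 variant. The size arithmetic, isolated:

```c
#include <stddef.h>
#include <stdint.h>

/* Bytes copied by otx2_lmt_mov() for a given lmtext value:
 * (4 + 2 * lmtext) 64-bit words, i.e. 32, 48 or 64 bytes. */
static inline size_t
lmt_mov_bytes(uint32_t lmtext)
{
	return (4 + 2 * lmtext) * sizeof(uint64_t);
}
```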
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
deleted file mode 100644
index 93fc95c0e1..0000000000
--- a/drivers/common/octeontx2/otx2_irq.c
+++ /dev/null
@@ -1,288 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-
-#ifdef RTE_EAL_VFIO
-
-#include <inttypes.h>
-#include <linux/vfio.h>
-#include <sys/eventfd.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
-#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \
- sizeof(int) * (MAX_INTR_VEC_ID))
-
-static int
-irq_get_info(struct rte_intr_handle *intr_handle)
-{
- struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc, vfio_dev_fd;
-
- irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
- if (rc < 0) {
- otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
- return rc;
- }
-
- otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x",
- irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID);
-
- if (irq.count > MAX_INTR_VEC_ID) {
- otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
- return -1;
- } else {
- if (rte_intr_max_intr_set(intr_handle, irq.count))
- return -1;
- }
-
- return 0;
-}
-
-static int
-irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- len = sizeof(struct vfio_irq_set) + sizeof(int32_t);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
-
- irq_set->start = vec;
- irq_set->count = 1;
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- /* Use vec fd to set interrupt vectors */
- fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
-
- return rc;
-}
-
-static int
-irq_init(struct rte_intr_handle *intr_handle)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
- uint32_t i;
-
- if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
- otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- return -ERANGE;
- }
-
- len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
- irq_set->start = 0;
- irq_set->count = rte_intr_max_intr_get(intr_handle);
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- fd_ptr = (int32_t *)&irq_set->data[0];
- for (i = 0; i < irq_set->count; i++)
- fd_ptr[i] = -1;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set irqs vector rc=%d", rc);
-
- return rc;
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int
-otx2_disable_irqs(struct rte_intr_handle *intr_handle)
-{
- /* Clear max_intr to indicate re-init next time */
- if (rte_intr_max_intr_set(intr_handle, 0))
- return -1;
- return rte_intr_disable(intr_handle);
-}
-
-/**
- * @internal
- * Register IRQ
- */
-int
-otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint32_t nb_efd, tmp_nb_efd;
- int rc, fd;
-
- /* If no max_intr read from VFIO */
- if (rte_intr_max_intr_get(intr_handle) == 0) {
- irq_get_info(intr_handle);
- irq_init(intr_handle);
- }
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- tmp_handle = intr_handle;
- /* Create new eventfd for interrupt vector */
- fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (fd == -1)
- return -ENODEV;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return errno;
-
- /* Register vector interrupt callback */
- rc = rte_intr_callback_register(tmp_handle, cb, data);
- if (rc) {
- otx2_err("Failed to register vector:0x%x irq callback.", vec);
- return rc;
- }
-
- rte_intr_efds_index_set(intr_handle, vec, fd);
- nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
- vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
- rte_intr_nb_efd_set(intr_handle, nb_efd);
-
- tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
- if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
- rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
-
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- /* Enable MSIX vectors to VFIO */
- return irq_config(intr_handle, vec);
-}
-
-/**
- * @internal
- * Unregister IRQ
- */
-void
-otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint8_t retries = 5; /* 5 ms */
- int rc, fd;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, rte_intr_max_intr_get(intr_handle));
- return;
- }
-
- tmp_handle = intr_handle;
- fd = rte_intr_efds_index_get(intr_handle, vec);
- if (fd == -1)
- return;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return;
-
- do {
- /* Un-register callback func from platform lib */
- rc = rte_intr_callback_unregister(tmp_handle, cb, data);
- /* Retry only if -EAGAIN */
- if (rc != -EAGAIN)
- break;
- rte_delay_ms(1);
- retries--;
- } while (retries);
-
- if (rc < 0) {
- otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
- return;
- }
-
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- if (rte_intr_efds_index_get(intr_handle, vec) != -1)
- close(rte_intr_efds_index_get(intr_handle, vec));
- /* Disable MSIX vectors from VFIO */
- rte_intr_efds_index_set(intr_handle, vec, -1);
- irq_config(intr_handle, vec);
-}
-
-#else
-
-/**
- * @internal
- * Register IRQ
- */
-int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
- return -ENOTSUP;
-}
-
-
-/**
- * @internal
- * Unregister IRQ
- */
-void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle)
-{
- return -ENOTSUP;
-}
-
-#endif /* RTE_EAL_VFIO */
diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h
deleted file mode 100644
index 0683cf5543..0000000000
--- a/drivers/common/octeontx2/otx2_irq.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IRQ_H_
-#define _OTX2_IRQ_H_
-
-#include <rte_pci.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-
-typedef struct {
-/* 128 devices translate to two 64 bits dwords */
-#define MAX_VFPF_DWORD_BITS 2
- uint64_t bits[MAX_VFPF_DWORD_BITS];
-} otx2_intr_t;
-
-__rte_internal
-int otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-void otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-int otx2_disable_irqs(struct rte_intr_handle *intr_handle);
-
-#endif /* _OTX2_IRQ_H_ */
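Reviewer note: `otx2_intr_t` above maps 128 VFs onto two 64-bit dwords, the same layout the FLR handler scans (`dword = vf / 64`, `bit = vf % 64`). Hypothetical set/test helpers illustrating that indexing (these names do not exist in the driver):

```c
#include <stdint.h>

#define MAX_VFPF_DWORD_BITS 2 /* 128 VFs -> two 64-bit dwords */

static inline void
vf_bit_set(uint64_t bits[MAX_VFPF_DWORD_BITS], unsigned int vf)
{
	bits[vf / 64] |= 1ULL << (vf % 64);
}

static inline int
vf_bit_test(const uint64_t bits[MAX_VFPF_DWORD_BITS], unsigned int vf)
{
	return (bits[vf / 64] >> (vf % 64)) & 1;
}
```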
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
deleted file mode 100644
index 6df1e8ea63..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <errno.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "otx2_mbox.h"
-#include "otx2_dev.h"
-
-#define RVU_AF_AFPF_MBOX0 (0x02000)
-#define RVU_AF_AFPF_MBOX1 (0x02008)
-
-#define RVU_PF_PFAF_MBOX0 (0xC00)
-#define RVU_PF_PFAF_MBOX1 (0xC08)
-
-#define RVU_PF_VFX_PFVF_MBOX0 (0x0000)
-#define RVU_PF_VFX_PFVF_MBOX1 (0x0008)
-
-#define RVU_VF_VFPF_MBOX0 (0x0000)
-#define RVU_VF_VFPF_MBOX1 (0x0008)
-
-static inline uint16_t
-msgs_offset(void)
-{
- return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
-}
-
-void
-otx2_mbox_fini(struct otx2_mbox *mbox)
-{
- mbox->reg_base = 0;
- mbox->hwbase = 0;
- rte_free(mbox->dev);
- mbox->dev = NULL;
-}
-
-void
-otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- rte_spinlock_lock(&mdev->mbox_lock);
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- tx_hdr->msg_size = 0;
- tx_hdr->num_msgs = 0;
- rx_hdr->msg_size = 0;
- rx_hdr->num_msgs = 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-}
-
-int
-otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevs, uint64_t intr_offset)
-{
- struct otx2_mbox_dev *mdev;
- int devid;
-
- mbox->intr_offset = intr_offset;
- mbox->reg_base = reg_base;
- mbox->hwbase = hwbase;
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_PFVF:
- mbox->tx_start = MBOX_DOWN_TX_START;
- mbox->rx_start = MBOX_DOWN_RX_START;
- mbox->tx_size = MBOX_DOWN_TX_SIZE;
- mbox->rx_size = MBOX_DOWN_RX_SIZE;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_VFPF:
- mbox->tx_start = MBOX_DOWN_RX_START;
- mbox->rx_start = MBOX_DOWN_TX_START;
- mbox->tx_size = MBOX_DOWN_RX_SIZE;
- mbox->rx_size = MBOX_DOWN_TX_SIZE;
- break;
- case MBOX_DIR_AFPF_UP:
- case MBOX_DIR_PFVF_UP:
- mbox->tx_start = MBOX_UP_TX_START;
- mbox->rx_start = MBOX_UP_RX_START;
- mbox->tx_size = MBOX_UP_TX_SIZE;
- mbox->rx_size = MBOX_UP_RX_SIZE;
- break;
- case MBOX_DIR_PFAF_UP:
- case MBOX_DIR_VFPF_UP:
- mbox->tx_start = MBOX_UP_RX_START;
- mbox->rx_start = MBOX_UP_TX_START;
- mbox->tx_size = MBOX_UP_RX_SIZE;
- mbox->rx_size = MBOX_UP_TX_SIZE;
- break;
- default:
- return -ENODEV;
- }
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_AFPF_UP:
- mbox->trigger = RVU_AF_AFPF_MBOX0;
- mbox->tr_shift = 4;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_PFAF_UP:
- mbox->trigger = RVU_PF_PFAF_MBOX1;
- mbox->tr_shift = 0;
- break;
- case MBOX_DIR_PFVF:
- case MBOX_DIR_PFVF_UP:
- mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
- mbox->tr_shift = 12;
- break;
- case MBOX_DIR_VFPF:
- case MBOX_DIR_VFPF_UP:
- mbox->trigger = RVU_VF_VFPF_MBOX1;
- mbox->tr_shift = 0;
- break;
- default:
- return -ENODEV;
- }
-
- mbox->dev = rte_zmalloc("mbox dev",
- ndevs * sizeof(struct otx2_mbox_dev),
- OTX2_ALIGN);
- if (!mbox->dev) {
- otx2_mbox_fini(mbox);
- return -ENOMEM;
- }
- mbox->ndevs = ndevs;
- for (devid = 0; devid < ndevs; devid++) {
- mdev = &mbox->dev[devid];
- mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE));
- rte_spinlock_init(&mdev->mbox_lock);
- /* Init header to reset value */
- otx2_mbox_reset(mbox, devid);
- }
-
- return 0;
-}
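Reviewer note: `msgs_offset()` and the space checks in `otx2_mbox_alloc_msg_rsp()` below lean on `RTE_ALIGN`, which rounds up to the next multiple of a power-of-two alignment. A minimal sketch of that rounding, assuming `MBOX_MSG_ALIGN` is 16 as in the mbox header:

```c
#include <stdint.h>

#define MBOX_MSG_ALIGN 16 /* assumed value, from otx2_mbox.h */

/* Round x up to the next multiple of align (align must be a power
 * of two) - the operation RTE_ALIGN performs. */
static inline uintptr_t
align_up(uintptr_t x, uintptr_t align)
{
	return (x + align - 1) & ~(align - 1);
}
```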
-
-/**
- * @internal
- * Allocate a message response
- */
-struct mbox_msghdr *
-otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size,
- int size_rsp)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr = NULL;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN);
- /* Check if there is space in mailbox */
- if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset())
- goto exit;
- if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset())
- goto exit;
- if (mdev->msg_size == 0)
- mdev->num_msgs = 0;
- mdev->num_msgs++;
-
- msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase +
- mbox->tx_start + msgs_offset() + mdev->msg_size));
-
- /* Clear the whole msg region */
- otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size);
- /* Init message header with reset values */
- msghdr->ver = OTX2_MBOX_VERSION;
- mdev->msg_size += size;
- mdev->rsp_size += size_rsp;
- msghdr->next_msgoff = mdev->msg_size + msgs_offset();
-exit:
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return msghdr;
-}
-
-/**
- * @internal
- * Send a mailbox message
- */
-void
-otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- /* Reset header for next messages */
- tx_hdr->msg_size = mdev->msg_size;
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- mdev->msgs_acked = 0;
-
- /* num_msgs != 0 signals to the peer that the buffer has a number of
- * messages. So this should be written after copying txmem
- */
- tx_hdr->num_msgs = mdev->num_msgs;
- rx_hdr->num_msgs = 0;
-
- /* Sync mbox data into memory */
- rte_wmb();
-
- /* The interrupt should be fired after num_msgs is written
- * to the shared memory
- */
- rte_write64(1, (volatile void *)(mbox->reg_base +
- (mbox->trigger | (devid << mbox->tr_shift))));
-}
-
-/**
- * @internal
- * Wait and get mailbox response
- */
-int
-otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp(mbox, devid);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-/**
- * Polling for given wait time to get mailbox response
- */
-static int
-mbox_poll(struct otx2_mbox *mbox, uint32_t wait)
-{
- uint32_t timeout = 0, sleep = 1;
- uint32_t wait_us = wait * 1000;
- uint64_t rsp_reg = 0;
- uintptr_t reg_addr;
-
- reg_addr = mbox->reg_base + mbox->intr_offset;
- do {
- rsp_reg = otx2_read64(reg_addr);
-
- if (timeout >= wait_us)
- return -ETIMEDOUT;
-
- rte_delay_us(sleep);
- timeout += sleep;
- } while (!rsp_reg);
-
- rte_smp_rmb();
-
- /* Clear interrupt */
- otx2_write64(rsp_reg, reg_addr);
-
- /* Reset mbox */
- otx2_mbox_reset(mbox, 0);
-
- return 0;
-}
-
-/**
- * @internal
- * Wait and get mailbox response with timeout
- */
-int
-otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-static int
-mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo)
-{
- volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- uint32_t timeout = 0, sleep = 1;
-
- rst_timo = rst_timo * 1000; /* Milliseconds to microseconds */
- while (mdev->num_msgs > mdev->msgs_acked) {
- rte_delay_us(sleep);
- timeout += sleep;
- if (timeout >= rst_timo) {
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->rx_start);
-
- otx2_err("MBOX[devid: %d] message wait timeout %d, "
- "num_msgs: %d, msgs_acked: %d "
- "(tx/rx num_msgs: %d/%d), msg_size: %d, "
- "rsp_size: %d",
- devid, timeout, mdev->num_msgs,
- mdev->msgs_acked, tx_hdr->num_msgs,
- rx_hdr->num_msgs, mdev->msg_size,
- mdev->rsp_size);
-
- return -EIO;
- }
- rte_rmb();
- }
- return 0;
-}
-
-int
-otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int rc = 0;
-
- /* Sync with mbox region */
- rte_rmb();
-
- if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 ||
- mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) {
- /* In case of VF, Wait a bit more to account round trip delay */
- tmo = tmo * 2;
- }
-
- /* Wait message */
- if (rte_thread_is_intr())
- rc = mbox_poll(mbox, tmo);
- else
- rc = mbox_wait(mbox, devid, tmo);
-
- if (!rc)
- rc = mdev->num_msgs;
-
- return rc;
-}
-
-/**
- * @internal
- * Wait for the mailbox response
- */
-int
-otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
-{
- return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
-}
-
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int avail;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- avail = mbox->tx_size - mdev->msg_size - msgs_offset();
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return avail;
-}
-
-int
-otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
-{
- struct ready_msg_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_ready(mbox);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->hdr.ver != OTX2_MBOX_VERSION) {
- otx2_err("Incompatible MBox versions(AF: 0x%04x DPDK: 0x%04x)",
- rsp->hdr.ver, OTX2_MBOX_VERSION);
- return -EPIPE;
- }
-
- if (pcifunc)
- *pcifunc = rsp->hdr.pcifunc;
-
- return 0;
-}
-
-int
-otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc,
- uint16_t id)
-{
- struct msg_rsp *rsp;
-
- rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
- if (!rsp)
- return -ENOMEM;
- rsp->hdr.id = id;
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
- rsp->hdr.rc = MBOX_MSG_INVALID;
- rsp->hdr.pcifunc = pcifunc;
-
- return 0;
-}
-
-/**
- * @internal
- * Convert mail box ID to name
- */
-const char *otx2_mbox_id2name(uint16_t id)
-{
- switch (id) {
-#define M(_name, _id, _1, _2, _3) case _id: return # _name;
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return "INVALID ID";
- }
-}
-
-int otx2_mbox_id2size(uint16_t id)
-{
- switch (id) {
-#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type);
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return 0;
- }
-}
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
deleted file mode 100644
index 25b521a7fa..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ /dev/null
@@ -1,1958 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MBOX_H__
-#define __OTX2_MBOX_H__
-
-#include <errno.h>
-#include <stdbool.h>
-
-#include <rte_ether.h>
-#include <rte_spinlock.h>
-
-#include <otx2_common.h>
-
-#define SZ_64K (64ULL * 1024ULL)
-#define SZ_1K (1ULL * 1024ULL)
-#define MBOX_SIZE SZ_64K
-
-/* AF/PF: PF initiated, PF/VF VF initiated */
-#define MBOX_DOWN_RX_START 0
-#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
-#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
-#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
-/* AF/PF: AF initiated, PF/VF PF initiated */
-#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
-#define MBOX_UP_RX_SIZE SZ_1K
-#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
-#define MBOX_UP_TX_SIZE SZ_1K
-
-#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
-# error "Incorrect mailbox area sizes"
-#endif
-
-#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
-
-#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */
-
-#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16 bytes */
-
-/* Mailbox directions */
-#define MBOX_DIR_AFPF 0 /* AF replies to PF */
-#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
-#define MBOX_DIR_PFVF 2 /* PF replies to VF */
-#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
-#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
-#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
-#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
-#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
-
-/* Device memory does not support unaligned access, instruct compiler to
- * not optimize the memory access when working with mailbox memory.
- */
-#define __otx2_io volatile
-
-struct otx2_mbox_dev {
- void *mbase; /* This dev's mbox region */
- rte_spinlock_t mbox_lock;
- uint16_t msg_size; /* Total msg size to be sent */
- uint16_t rsp_size; /* Total rsp size to be sure the reply is ok */
- uint16_t num_msgs; /* No of msgs sent or waiting for response */
- uint16_t msgs_acked; /* No of msgs for which response is received */
-};
-
-struct otx2_mbox {
- uintptr_t hwbase; /* Mbox region advertised by HW */
- uintptr_t reg_base;/* CSR base for this dev */
- uint64_t trigger; /* Trigger mbox notification */
- uint16_t tr_shift; /* Mbox trigger shift */
- uint64_t rx_start; /* Offset of Rx region in mbox memory */
- uint64_t tx_start; /* Offset of Tx region in mbox memory */
- uint16_t rx_size; /* Size of Rx region */
- uint16_t tx_size; /* Size of Tx region */
- uint16_t ndevs; /* The number of peers */
- struct otx2_mbox_dev *dev;
- uint64_t intr_offset; /* Offset to interrupt register */
-};
-
-/* Header which precedes all mbox messages */
-struct mbox_hdr {
- uint64_t __otx2_io msg_size; /* Total msgs size embedded */
- uint16_t __otx2_io num_msgs; /* No of msgs embedded */
-};
-
-/* Header which precedes every msg and is also part of it */
-struct mbox_msghdr {
- uint16_t __otx2_io pcifunc; /* Who's sending this msg */
- uint16_t __otx2_io id; /* Mbox message ID */
-#define OTX2_MBOX_REQ_SIG (0xdead)
-#define OTX2_MBOX_RSP_SIG (0xbeef)
- /* Signature, for validating corrupted msgs */
- uint16_t __otx2_io sig;
-#define OTX2_MBOX_VERSION (0x000b)
- /* Version of msg's structure for this ID */
- uint16_t __otx2_io ver;
- /* Offset of next msg within mailbox region */
- uint16_t __otx2_io next_msgoff;
- int __otx2_io rc; /* Msg processed response code */
-};
-
-/* Mailbox message types */
-#define MBOX_MSG_MASK 0xFFFF
-#define MBOX_MSG_INVALID 0xFFFE
-#define MBOX_MSG_MAX 0xFFFF
-
-#define MBOX_MESSAGES \
-/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
-M(READY, 0x001, ready, msg_req, ready_msg_rsp) \
-M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\
-M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\
-M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \
-M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \
-M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \
-M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \
-M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \
-M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \
-/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
-M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
-M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
-M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \
-M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \
-M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \
-M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \
-M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \
-M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\
-M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \
-M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \
-M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \
- cgx_pause_frm_cfg) \
-M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \
-M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \
-M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \
- cgx_mac_addr_add_rsp) \
-M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \
- msg_rsp) \
-M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \
- cgx_max_dmac_entries_get_rsp) \
-M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \
- cgx_set_link_state_msg, msg_rsp) \
-M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \
- cgx_phy_mod_type) \
-M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \
- msg_rsp) \
-M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
-M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\
- cgx_set_link_mode_rsp) \
-M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \
-M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
-/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
-M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \
- npa_lf_alloc_rsp) \
-M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \
-M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\
-M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\
-/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
-M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \
- sso_lf_alloc_rsp) \
-M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \
-M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\
-M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \
-M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \
- msg_rsp) \
-M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \
- msg_rsp) \
-M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
- sso_grp_priority) \
-M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
-M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
- msg_rsp) \
-M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
- sso_grp_stats) \
-M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \
- sso_hws_stats) \
-M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
- sso_release_xaq, msg_rsp) \
-/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
-M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
- tim_lf_alloc_rsp) \
-M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \
-M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\
-M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \
- tim_enable_rsp) \
-M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \
-/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
-M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, \
- cpt_lf_alloc_rsp_msg) \
-M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \
-M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \
- cpt_rd_wr_reg_msg) \
-M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \
- cpt_set_crypto_grp_req_msg, \
- msg_rsp) \
-M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
- cpt_inline_ipsec_cfg_msg, msg_rsp) \
-M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \
- cpt_rx_inline_lf_cfg_msg, msg_rsp) \
-M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \
-/* REE mbox IDs (range 0xE00 - 0xFFF) */ \
-M(REE_CONFIG_LF, 0xE01, ree_config_lf, ree_lf_req_msg, \
- msg_rsp) \
-M(REE_RD_WR_REGISTER, 0xE02, ree_rd_wr_register, ree_rd_wr_reg_msg, \
- ree_rd_wr_reg_msg) \
-M(REE_RULE_DB_PROG, 0xE03, ree_rule_db_prog, \
- ree_rule_db_prog_req_msg, \
- msg_rsp) \
-M(REE_RULE_DB_LEN_GET, 0xE04, ree_rule_db_len_get, ree_req_msg, \
- ree_rule_db_len_rsp_msg) \
-M(REE_RULE_DB_GET, 0xE05, ree_rule_db_get, \
- ree_rule_db_get_req_msg, \
- ree_rule_db_get_rsp_msg) \
-/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
-M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \
- npc_mcam_alloc_entry_req, \
- npc_mcam_alloc_entry_rsp) \
-M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \
- npc_mcam_free_entry_req, msg_rsp) \
-M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \
- npc_mcam_write_entry_req, msg_rsp) \
-M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \
- npc_mcam_shift_entry_req, \
- npc_mcam_shift_entry_rsp) \
-M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \
- npc_mcam_alloc_counter_req, \
- npc_mcam_alloc_counter_rsp) \
-M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \
- npc_mcam_unmap_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \
- npc_mcam_oper_counter_req, \
- npc_mcam_oper_counter_rsp) \
-M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\
- npc_mcam_alloc_and_write_entry_req, \
- npc_mcam_alloc_and_write_entry_rsp) \
-M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \
- npc_get_kex_cfg_rsp) \
-M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, \
- npc_install_flow_req, \
- npc_install_flow_rsp) \
-M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, \
- npc_delete_flow_req, msg_rsp) \
-M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \
- npc_mcam_read_entry_req, \
- npc_mcam_read_entry_rsp) \
-M(NPC_SET_PKIND, 0x6010, npc_set_pkind, \
- npc_set_pkind, \
- msg_rsp) \
-M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \
- npc_mcam_read_base_rule_rsp) \
-/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
-M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \
- nix_lf_alloc_rsp) \
-M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \
-M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \
- nix_aq_enq_rsp) \
-M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \
- msg_rsp) \
-M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \
- nix_txsch_alloc_rsp) \
-M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \
- msg_rsp) \
-M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \
- nix_txschq_config) \
-M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \
-M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \
-M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg_rsp) \
-M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \
- msg_rsp) \
-M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \
-M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \
-M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \
-M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \
-M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \
- nix_mark_format_cfg, \
- nix_mark_format_cfg_rsp) \
-M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \
-M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \
- nix_lso_format_cfg_rsp) \
-M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \
- msg_rsp) \
-M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \
- msg_rsp) \
-M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \
- msg_rsp) \
-M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
- nix_bp_cfg_rsp) \
-M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\
-M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \
- nix_get_mac_addr_rsp) \
-M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \
- nix_inline_ipsec_cfg, msg_rsp) \
-M(NIX_INLINE_IPSEC_LF_CFG, \
- 0x801a, nix_inline_ipsec_lf_cfg, \
- nix_inline_ipsec_lf_cfg, msg_rsp)
-
-/* Messages initiated by AF (range 0xC00 - 0xDFF) */
-#define MBOX_UP_CGX_MESSAGES \
-M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \
- msg_rsp) \
-M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \
- msg_rsp)
-
-enum {
-#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
-MBOX_MESSAGES
-MBOX_UP_CGX_MESSAGES
-#undef M
-};
-
-/* Mailbox message formats */
-
-#define RVU_DEFAULT_PF_FUNC 0xFFFF
-
-/* Generic request msg used for those mbox messages which
- * don't send any data in the request.
- */
-struct msg_req {
- struct mbox_msghdr hdr;
-};
-
-/* Generic response msg used as an ack or response for those mbox
- * messages which don't have a specific rsp msg format.
- */
-struct msg_rsp {
- struct mbox_msghdr hdr;
-};
-
-/* RVU mailbox error codes
- * Range 256 - 300.
- */
-enum rvu_af_status {
- RVU_INVALID_VF_ID = -256,
-};
-
-struct ready_msg_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sclk_feq; /* SCLK frequency */
- uint16_t __otx2_io rclk_freq; /* RCLK frequency */
-};
-
-enum npc_pkind_type {
- NPC_RX_CUSTOM_PRE_L2_PKIND = 55ULL,
- NPC_RX_VLAN_EXDSA_PKIND = 56ULL,
- NPC_RX_CHLEN24B_PKIND,
- NPC_RX_CPT_HDR_PKIND,
- NPC_RX_CHLEN90B_PKIND,
- NPC_TX_HIGIG_PKIND,
- NPC_RX_HIGIG_PKIND,
- NPC_RX_EXDSA_PKIND,
- NPC_RX_EDSA_PKIND,
- NPC_TX_DEF_PKIND,
-};
-
-#define OTX2_PRIV_FLAGS_CH_LEN_90B 254
-#define OTX2_PRIV_FLAGS_CH_LEN_24B 255
-
-/* Struct to set pkind */
-struct npc_set_pkind {
- struct mbox_msghdr hdr;
-#define OTX2_PRIV_FLAGS_DEFAULT BIT_ULL(0)
-#define OTX2_PRIV_FLAGS_EDSA BIT_ULL(1)
-#define OTX2_PRIV_FLAGS_HIGIG BIT_ULL(2)
-#define OTX2_PRIV_FLAGS_FDSA BIT_ULL(3)
-#define OTX2_PRIV_FLAGS_EXDSA BIT_ULL(4)
-#define OTX2_PRIV_FLAGS_VLAN_EXDSA BIT_ULL(5)
-#define OTX2_PRIV_FLAGS_CUSTOM BIT_ULL(63)
- uint64_t __otx2_io mode;
-#define PKIND_TX BIT_ULL(0)
-#define PKIND_RX BIT_ULL(1)
- uint8_t __otx2_io dir;
- uint8_t __otx2_io pkind; /* valid only in case custom flag */
- uint8_t __otx2_io var_len_off;
- /* Offset of custom header length field.
- * Valid only for pkind NPC_RX_CUSTOM_PRE_L2_PKIND
- */
- uint8_t __otx2_io var_len_off_mask; /* Mask for length within the offset */
- uint8_t __otx2_io shift_dir;
- /* Shift direction to get length of the
- * header at var_len_off
- */
-};
-
-/* Structure for requesting resource provisioning.
- * The 'modify' flag is to be used when either requesting more
- * resources or detaching part of a certain resource type.
- * Rest of the fields specify how many of what type to
- * be attached.
- * To request LFs from two blocks of same type this mailbox
- * can be sent twice as below:
- * struct rsrc_attach *attach;
- * .. Allocate memory for message ..
- * attach->cptlfs = 3; <3 LFs from CPT0>
- * .. Send message ..
- * .. Allocate memory for message ..
- * attach->modify = 1;
- * attach->cpt_blkaddr = BLKADDR_CPT1;
- * attach->cptlfs = 2; <2 LFs from CPT1>
- * .. Send message ..
- */
-struct rsrc_attach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io modify:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io reelfs;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- int __otx2_io cpt_blkaddr;
- /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */
- int __otx2_io ree_blkaddr;
-};
-
-/* Structure for relinquishing resources.
- * 'partial' flag to be used when relinquishing all resources
- * but only of a certain type. If not set, all resources of all
- * types provisioned to the RVU function will be detached.
- */
-struct rsrc_detach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io partial:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint8_t __otx2_io sso:1;
- uint8_t __otx2_io ssow:1;
- uint8_t __otx2_io timlfs:1;
- uint8_t __otx2_io cptlfs:1;
- uint8_t __otx2_io reelfs:1;
-};
-
-/* NIX Transmit schedulers */
-#define NIX_TXSCH_LVL_SMQ 0x0
-#define NIX_TXSCH_LVL_MDQ 0x0
-#define NIX_TXSCH_LVL_TL4 0x1
-#define NIX_TXSCH_LVL_TL3 0x2
-#define NIX_TXSCH_LVL_TL2 0x3
-#define NIX_TXSCH_LVL_TL1 0x4
-#define NIX_TXSCH_LVL_CNT 0x5
-
-/*
- * Number of resources available to the caller.
- * In reply to MBOX_MSG_FREE_RSRC_CNT.
- */
-struct free_rsrcs_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT];
- uint16_t __otx2_io sso;
- uint16_t __otx2_io tim;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io cpt;
- uint8_t __otx2_io npa;
- uint8_t __otx2_io nix;
- uint16_t __otx2_io schq_nix1[NIX_TXSCH_LVL_CNT];
- uint8_t __otx2_io nix1;
- uint8_t __otx2_io cpt1;
- uint8_t __otx2_io ree0;
- uint8_t __otx2_io ree1;
-};
-
-#define MSIX_VECTOR_INVALID 0xFFFF
-#define MAX_RVU_BLKLF_CNT 256
-
-struct msix_offset_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io npa_msixoff;
- uint16_t __otx2_io nix_msixoff;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cpt1_lfs;
- uint16_t __otx2_io ree0_lfs;
- uint16_t __otx2_io ree1_lfs;
- uint16_t __otx2_io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT];
-
-};
-
-/* CGX mbox message formats */
-
-struct cgx_stats_rsp {
- struct mbox_msghdr hdr;
-#define CGX_RX_STATS_COUNT 13
-#define CGX_TX_STATS_COUNT 18
- uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT];
- uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT];
-};
-
-struct cgx_fec_stats_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io fec_corr_blks;
- uint64_t __otx2_io fec_uncorr_blks;
-};
-/* Structure for requesting the operation for
- * setting/getting mac address in the CGX interface
- */
-struct cgx_mac_addr_set_or_get {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for requesting the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for response against the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for requesting the operation to
- * delete DMAC filter entry from CGX interface
- */
-struct cgx_mac_addr_del_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for response against the operation to
- * get maximum supported DMAC filter entries
- */
-struct cgx_max_dmac_entries_get_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io max_dmac_filters;
-};
-
-struct cgx_link_user_info {
- uint64_t __otx2_io link_up:1;
- uint64_t __otx2_io full_duplex:1;
- uint64_t __otx2_io lmac_type_id:4;
- uint64_t __otx2_io speed:20; /* speed in Mbps */
- uint64_t __otx2_io an:1; /* AN supported or not */
- uint64_t __otx2_io fec:2; /* FEC type if enabled else 0 */
- uint64_t __otx2_io port:8;
-#define LMACTYPE_STR_LEN 16
- char lmac_type[LMACTYPE_STR_LEN];
-};
-
-struct cgx_link_info_msg {
- struct mbox_msghdr hdr;
- struct cgx_link_user_info link_info;
-};
-
-struct cgx_ptp_rx_info_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ptp_en;
-};
-
-struct cgx_pause_frm_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io set;
- /* set = 1 if the request is to config pause frames */
- /* set = 0 if the request is to fetch pause frames config */
- uint8_t __otx2_io rx_pause;
- uint8_t __otx2_io tx_pause;
-};
-
-struct sfp_eeprom_s {
-#define SFP_EEPROM_SIZE 256
- uint16_t __otx2_io sff_id;
- uint8_t __otx2_io buf[SFP_EEPROM_SIZE];
- uint64_t __otx2_io reserved;
-};
-
-enum fec_type {
- OTX2_FEC_NONE,
- OTX2_FEC_BASER,
- OTX2_FEC_RS,
-};
-
-struct phy_s {
- uint64_t __otx2_io can_change_mod_type : 1;
- uint64_t __otx2_io mod_type : 1;
-};
-
-struct cgx_lmac_fwdata_s {
- uint16_t __otx2_io rw_valid;
- uint64_t __otx2_io supported_fec;
- uint64_t __otx2_io supported_an;
- uint64_t __otx2_io supported_link_modes;
- /* Only applicable if AN is supported */
- uint64_t __otx2_io advertised_fec;
- uint64_t __otx2_io advertised_link_modes;
- /* Only applicable if SFP/QSFP slot is present */
- struct sfp_eeprom_s sfp_eeprom;
- struct phy_s phy;
-#define LMAC_FWDATA_RESERVED_MEM 1023
- uint64_t __otx2_io reserved[LMAC_FWDATA_RESERVED_MEM];
-};
-
-struct cgx_fw_data {
- struct mbox_msghdr hdr;
- struct cgx_lmac_fwdata_s fwdata;
-};
-
-struct fec_mode {
- struct mbox_msghdr hdr;
- int __otx2_io fec;
-};
-
-struct cgx_set_link_state_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
-};
-
-struct cgx_phy_mod_type {
- struct mbox_msghdr hdr;
- int __otx2_io mod;
-};
-
-struct cgx_set_link_mode_args {
- uint32_t __otx2_io speed;
- uint8_t __otx2_io duplex;
- uint8_t __otx2_io an;
- uint8_t __otx2_io ports;
- uint64_t __otx2_io mode;
-};
-
-struct cgx_set_link_mode_req {
- struct mbox_msghdr hdr;
- struct cgx_set_link_mode_args args;
-};
-
-struct cgx_set_link_mode_rsp {
- struct mbox_msghdr hdr;
- int __otx2_io status;
-};
-/* NPA mbox message formats */
-
-/* NPA mailbox error codes
- * Range 301 - 400.
- */
-enum npa_af_status {
- NPA_AF_ERR_PARAM = -301,
- NPA_AF_ERR_AQ_FULL = -302,
- NPA_AF_ERR_AQ_ENQUEUE = -303,
- NPA_AF_ERR_AF_LF_INVALID = -304,
- NPA_AF_ERR_AF_LF_ALLOC = -305,
- NPA_AF_ERR_LF_RESET = -306,
-};
-
-#define NPA_AURA_SZ_0 0
-#define NPA_AURA_SZ_128 1
-#define NPA_AURA_SZ_256 2
-#define NPA_AURA_SZ_512 3
-#define NPA_AURA_SZ_1K 4
-#define NPA_AURA_SZ_2K 5
-#define NPA_AURA_SZ_4K 6
-#define NPA_AURA_SZ_8K 7
-#define NPA_AURA_SZ_16K 8
-#define NPA_AURA_SZ_32K 9
-#define NPA_AURA_SZ_64K 10
-#define NPA_AURA_SZ_128K 11
-#define NPA_AURA_SZ_256K 12
-#define NPA_AURA_SZ_512K 13
-#define NPA_AURA_SZ_1M 14
-#define NPA_AURA_SZ_MAX 15
-
-/* For NPA LF context alloc and init */
-struct npa_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- int __otx2_io aura_sz; /* No of auras. See NPA_AURA_SZ_* */
- uint32_t __otx2_io nr_pools; /* No of pools */
- uint64_t __otx2_io way_mask;
-};
-
-struct npa_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */
- uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */
- uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */
-};
-
-/* NPA AQ enqueue msg */
-struct npa_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io aura_id;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == AURA.
- * LF fills the pool_id in aura.pool_addr. AF will translate
- * the pool_id to pool context pointer.
- */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == WRITE/INIT and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == AURA */
- __otx2_io struct npa_aura_s aura_mask;
- /* Valid when op == WRITE and ctype == POOL */
- __otx2_io struct npa_pool_s pool_mask;
- };
-};
-
-struct npa_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- /* Valid when op == READ and ctype == AURA */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == READ and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
-};
-
-/* Disable all contexts of type 'ctype' */
-struct hwctx_disable_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ctype;
-};
-
-/* NIX mbox message formats */
-
-/* NIX mailbox error codes
- * Range 401 - 500.
- */
-enum nix_af_status {
- NIX_AF_ERR_PARAM = -401,
- NIX_AF_ERR_AQ_FULL = -402,
- NIX_AF_ERR_AQ_ENQUEUE = -403,
- NIX_AF_ERR_AF_LF_INVALID = -404,
- NIX_AF_ERR_AF_LF_ALLOC = -405,
- NIX_AF_ERR_TLX_ALLOC_FAIL = -406,
- NIX_AF_ERR_TLX_INVALID = -407,
- NIX_AF_ERR_RSS_SIZE_INVALID = -408,
- NIX_AF_ERR_RSS_GRPS_INVALID = -409,
- NIX_AF_ERR_FRS_INVALID = -410,
- NIX_AF_ERR_RX_LINK_INVALID = -411,
- NIX_AF_INVAL_TXSCHQ_CFG = -412,
- NIX_AF_SMQ_FLUSH_FAILED = -413,
- NIX_AF_ERR_LF_RESET = -414,
- NIX_AF_ERR_RSS_NOSPC_FIELD = -415,
- NIX_AF_ERR_RSS_NOSPC_ALGO = -416,
- NIX_AF_ERR_MARK_CFG_FAIL = -417,
- NIX_AF_ERR_LSO_CFG_FAIL = -418,
- NIX_AF_INVAL_NPA_PF_FUNC = -419,
- NIX_AF_INVAL_SSO_PF_FUNC = -420,
- NIX_AF_ERR_TX_VTAG_NOSPC = -421,
- NIX_AF_ERR_RX_VTAG_INUSE = -422,
- NIX_AF_ERR_PTP_CONFIG_FAIL = -423,
-};
-
-/* For NIX LF context alloc and init */
-struct nix_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint32_t __otx2_io rq_cnt; /* No of receive queues */
- uint32_t __otx2_io sq_cnt; /* No of send queues */
- uint32_t __otx2_io cq_cnt; /* No of completion queues */
- uint8_t __otx2_io xqe_sz;
- uint16_t __otx2_io rss_sz;
- uint8_t __otx2_io rss_grps;
- uint16_t __otx2_io npa_func;
- /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */
- uint16_t __otx2_io sso_func;
- uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */
- uint64_t __otx2_io way_mask;
-#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0)
- uint64_t flags;
-};
-
-struct nix_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sqb_size;
- uint16_t __otx2_io rx_chan_base;
- uint16_t __otx2_io tx_chan_base;
- uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */
- uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */
- uint8_t __otx2_io lso_tsov4_idx;
- uint8_t __otx2_io lso_tsov6_idx;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
- uint8_t __otx2_io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
- uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */
- uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */
- uint8_t __otx2_io hw_rx_tstamp_en; /*set if rx timestamping enabled */
- uint8_t __otx2_io cgx_links; /* No. of CGX links present in HW */
- uint8_t __otx2_io lbk_links; /* No. of LBK links present in HW */
- uint8_t __otx2_io sdp_links; /* No. of SDP links present in HW */
- uint8_t __otx2_io tx_link; /* Transmit channel link number */
-};
-
-struct nix_lf_free_req {
- struct mbox_msghdr hdr;
-#define NIX_LF_DISABLE_FLOWS BIT_ULL(0)
-#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1)
- uint64_t __otx2_io flags;
-};
-
-/* NIX AQ enqueue msg */
-struct nix_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io qidx;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce_mask;
- };
-};
-
-struct nix_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- __otx2_io struct nix_rq_ctx_s rq;
- __otx2_io struct nix_sq_ctx_s sq;
- __otx2_io struct nix_cq_ctx_s cq;
- __otx2_io struct nix_rsse_s rss;
- __otx2_io struct nix_rx_mce_s mce;
- };
-};
-
-/* Tx scheduler/shaper mailbox messages */
-
-#define MAX_TXSCHQ_PER_FUNC 128
-
-struct nix_txsch_alloc_req {
- struct mbox_msghdr hdr;
- /* Scheduler queue count request at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
-};
-
-struct nix_txsch_alloc_rsp {
- struct mbox_msghdr hdr;
- /* Scheduler queue count allocated at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
- /* Scheduler queue list allocated at each level */
- uint16_t __otx2_io
- schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Traffic aggregation scheduler level */
- uint8_t __otx2_io aggr_level;
- /* Aggregation lvl's RR_PRIO config */
- uint8_t __otx2_io aggr_lvl_rr_prio;
- /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
- uint8_t __otx2_io link_cfg_lvl;
-};
-
-struct nix_txsch_free_req {
- struct mbox_msghdr hdr;
-#define TXSCHQ_FREE_ALL BIT_ULL(0)
- uint16_t __otx2_io flags;
- /* Scheduler queue level to be freed */
- uint16_t __otx2_io schq_lvl;
- /* List of scheduler queues to be freed */
- uint16_t __otx2_io schq;
-};
-
-struct nix_txschq_config {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
- uint8_t __otx2_io read;
-#define TXSCHQ_IDX_SHIFT 16
-#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1)
-#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK)
- uint8_t __otx2_io num_regs;
-#define MAX_REGS_PER_MBOX_MSG 20
- uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG];
- uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG];
- /* All 0's => overwrite with new value */
- uint64_t __otx2_io regval_mask[MAX_REGS_PER_MBOX_MSG];
-};
-
-struct nix_vtag_config {
- struct mbox_msghdr hdr;
- /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
- uint8_t __otx2_io vtag_size;
- /* cfg_type is '0' for tx vlan cfg
- * cfg_type is '1' for rx vlan cfg
- */
- uint8_t __otx2_io cfg_type;
- union {
- /* Valid when cfg_type is '0' */
- struct {
- uint64_t __otx2_io vtag0;
- uint64_t __otx2_io vtag1;
-
- /* cfg_vtag0 & cfg_vtag1 fields are valid
- * when free_vtag0 & free_vtag1 are '0's.
- */
- /* cfg_vtag0 = 1 to configure vtag0 */
- uint8_t __otx2_io cfg_vtag0 :1;
- /* cfg_vtag1 = 1 to configure vtag1 */
- uint8_t __otx2_io cfg_vtag1 :1;
-
- /* vtag0_idx & vtag1_idx are only valid when
- * both cfg_vtag0 & cfg_vtag1 are '0's,
- * these fields are used along with free_vtag0
- * & free_vtag1 to free the nix lf's tx_vlan
- * configuration.
- *
- * Denotes the indices of tx_vtag def registers
- * that needs to be cleared and freed.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-
- /* Free_vtag0 & free_vtag1 fields are valid
- * when cfg_vtag0 & cfg_vtag1 are '0's.
- */
- /* Free_vtag0 = 1 clears vtag0 configuration
- * vtag0_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag0 :1;
- /* Free_vtag1 = 1 clears vtag1 configuration
- * vtag1_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag1 :1;
- } tx;
-
- /* Valid when cfg_type is '1' */
- struct {
- /* Rx vtag type index, valid values are in 0..7 range */
- uint8_t __otx2_io vtag_type;
- /* Rx vtag strip */
- uint8_t __otx2_io strip_vtag :1;
- /* Rx vtag capture */
- uint8_t __otx2_io capture_vtag :1;
- } rx;
- };
-};
-
-struct nix_vtag_config_rsp {
- struct mbox_msghdr hdr;
- /* Indices of tx_vtag def registers used to configure
- * tx vtag0 & vtag1 headers, these indices are valid
- * when nix_vtag_config mbox requested for vtag0 and/
- * or vtag1 configuration.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-};
-
-struct nix_rss_flowkey_cfg {
- struct mbox_msghdr hdr;
- int __otx2_io mcam_index; /* MCAM entry index to modify */
- uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */
-#define FLOW_KEY_TYPE_PORT BIT(0)
-#define FLOW_KEY_TYPE_IPV4 BIT(1)
-#define FLOW_KEY_TYPE_IPV6 BIT(2)
-#define FLOW_KEY_TYPE_TCP BIT(3)
-#define FLOW_KEY_TYPE_UDP BIT(4)
-#define FLOW_KEY_TYPE_SCTP BIT(5)
-#define FLOW_KEY_TYPE_NVGRE BIT(6)
-#define FLOW_KEY_TYPE_VXLAN BIT(7)
-#define FLOW_KEY_TYPE_GENEVE BIT(8)
-#define FLOW_KEY_TYPE_ETH_DMAC BIT(9)
-#define FLOW_KEY_TYPE_IPV6_EXT BIT(10)
-#define FLOW_KEY_TYPE_GTPU BIT(11)
-#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12)
-#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13)
-#define FLOW_KEY_TYPE_INNR_TCP BIT(14)
-#define FLOW_KEY_TYPE_INNR_UDP BIT(15)
-#define FLOW_KEY_TYPE_INNR_SCTP BIT(16)
-#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17)
-#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18)
-#define FLOW_KEY_TYPE_CUSTOM0 BIT(19)
-#define FLOW_KEY_TYPE_VLAN BIT(20)
-#define FLOW_KEY_TYPE_L4_DST BIT(28)
-#define FLOW_KEY_TYPE_L4_SRC BIT(29)
-#define FLOW_KEY_TYPE_L3_DST BIT(30)
-#define FLOW_KEY_TYPE_L3_SRC BIT(31)
- uint8_t __otx2_io group; /* RSS context or group */
-};
-
-struct nix_rss_flowkey_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io alg_idx; /* Selected algo index */
-};
-
-struct nix_set_mac_addr {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_get_mac_addr_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_mark_format_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io offset;
- uint8_t __otx2_io y_mask;
- uint8_t __otx2_io y_val;
- uint8_t __otx2_io r_mask;
- uint8_t __otx2_io r_val;
-};
-
-struct nix_mark_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mark_format_idx;
-};
-
-struct nix_lso_format_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io field_mask;
- uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX];
-};
-
-struct nix_lso_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lso_format_idx;
-};
-
-struct nix_rx_mode {
- struct mbox_msghdr hdr;
-#define NIX_RX_MODE_UCAST BIT(0)
-#define NIX_RX_MODE_PROMISC BIT(1)
-#define NIX_RX_MODE_ALLMULTI BIT(2)
- uint16_t __otx2_io mode;
-};
-
-struct nix_rx_cfg {
- struct mbox_msghdr hdr;
-#define NIX_RX_OL3_VERIFY BIT(0)
-#define NIX_RX_OL4_VERIFY BIT(1)
- uint8_t __otx2_io len_verify; /* Outer L3/L4 len check */
-#define NIX_RX_CSUM_OL4_VERIFY BIT(0)
- uint8_t __otx2_io csum_verify; /* Outer L4 checksum verification */
-};
-
-struct nix_frs_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */
- uint8_t __otx2_io update_minlen; /* Set minlen also */
- uint8_t __otx2_io sdp_link; /* Set SDP RX link */
- uint16_t __otx2_io maxlen;
- uint16_t __otx2_io minlen;
-};
-
-struct nix_set_vlan_tpid {
- struct mbox_msghdr hdr;
-#define NIX_VLAN_TYPE_INNER 0
-#define NIX_VLAN_TYPE_OUTER 1
- uint8_t __otx2_io vlan_type;
- uint16_t __otx2_io tpid;
-};
-
-struct nix_bp_cfg_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io chan_base; /* Starting channel number */
- uint8_t __otx2_io chan_cnt; /* Number of channels */
- uint8_t __otx2_io bpid_per_chan;
- /* bpid_per_chan = 0 assigns single bp id for range of channels */
- /* bpid_per_chan = 1 assigns separate bp id for each channel */
-};
-
-/* PF can be mapped to either CGX or LBK interface,
- * so maximum 64 channels are possible.
- */
-#define NIX_MAX_CHAN 64
-struct nix_bp_cfg_rsp {
- struct mbox_msghdr hdr;
- /* Channel and bpid mapping */
- uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN];
- /* Number of channel for which bpids are assigned */
- uint8_t __otx2_io chan_cnt;
-};
-
-/* Global NIX inline IPSec configuration */
-struct nix_inline_ipsec_cfg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io cpt_credit;
- struct {
- uint8_t __otx2_io egrp;
- uint8_t __otx2_io opcode;
- } gen_cfg;
- struct {
- uint16_t __otx2_io cpt_pf_func;
- uint8_t __otx2_io cpt_slot;
- } inst_qsel;
- uint8_t __otx2_io enable;
-};
-
-/* Per NIX LF inline IPSec configuration */
-struct nix_inline_ipsec_lf_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io sa_base_addr;
- struct {
- uint32_t __otx2_io tag_const;
- uint16_t __otx2_io lenm1_max;
- uint8_t __otx2_io sa_pow2_size;
- uint8_t __otx2_io tt;
- } ipsec_cfg0;
- struct {
- uint32_t __otx2_io sa_idx_max;
- uint8_t __otx2_io sa_idx_w;
- } ipsec_cfg1;
- uint8_t __otx2_io enable;
-};
-
-/* SSO mailbox error codes
- * Range 501 - 600.
- */
-enum sso_af_status {
- SSO_AF_ERR_PARAM = -501,
- SSO_AF_ERR_LF_INVALID = -502,
- SSO_AF_ERR_AF_LF_ALLOC = -503,
- SSO_AF_ERR_GRP_EBUSY = -504,
- SSO_AF_INVAL_NPA_PF_FUNC = -505,
-};
-
-struct sso_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io xaq_buf_size;
- uint32_t __otx2_io xaq_wq_entries;
- uint32_t __otx2_io in_unit_entries;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-/* SSOW mailbox error codes
- * Range 601 - 700.
- */
-enum ssow_af_status {
- SSOW_AF_ERR_PARAM = -601,
- SSOW_AF_ERR_LF_INVALID = -602,
- SSOW_AF_ERR_AF_LF_ALLOC = -603,
-};
-
-struct ssow_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct ssow_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct sso_hw_setconfig {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io npa_aura_id;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_release_xaq {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_info_req {
- struct mbox_msghdr hdr;
- union {
- uint16_t __otx2_io grp;
- uint16_t __otx2_io hws;
- };
-};
-
-struct sso_grp_priority {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint8_t __otx2_io priority;
- uint8_t __otx2_io affinity;
- uint8_t __otx2_io weight;
-};
-
-struct sso_grp_qos_cfg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint32_t __otx2_io xaq_limit;
- uint16_t __otx2_io taq_thr;
- uint16_t __otx2_io iaq_thr;
-};
-
-struct sso_grp_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint64_t __otx2_io ws_pc;
- uint64_t __otx2_io ext_pc;
- uint64_t __otx2_io wa_pc;
- uint64_t __otx2_io ts_pc;
- uint64_t __otx2_io ds_pc;
- uint64_t __otx2_io dq_pc;
- uint64_t __otx2_io aw_status;
- uint64_t __otx2_io page_cnt;
-};
-
-struct sso_hws_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hws;
- uint64_t __otx2_io arbitration;
-};
-
-/* CPT mailbox error codes
- * Range 901 - 1000.
- */
-enum cpt_af_status {
- CPT_AF_ERR_PARAM = -901,
- CPT_AF_ERR_GRP_INVALID = -902,
- CPT_AF_ERR_LF_INVALID = -903,
- CPT_AF_ERR_ACCESS_DENIED = -904,
- CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905,
- CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906,
- CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907,
- CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908
-};
-
-/* CPT mbox message formats */
-
-struct cpt_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint8_t __otx2_io is_write;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_set_crypto_grp_req_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io crypto_eng_grp;
-};
-
-struct cpt_lf_alloc_req_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io nix_pf_func;
- uint16_t __otx2_io sso_pf_func;
- uint16_t __otx2_io eng_grpmask;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_lf_alloc_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io eng_grpmsk;
-};
-
-#define CPT_INLINE_INBOUND 0
-#define CPT_INLINE_OUTBOUND 1
-
-struct cpt_inline_ipsec_cfg_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
- uint8_t __otx2_io slot;
- uint8_t __otx2_io dir;
- uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */
- uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */
-};
-
-struct cpt_rx_inline_lf_cfg_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sso_pf_func;
-};
-
-enum cpt_eng_type {
- CPT_ENG_TYPE_AE = 1,
- CPT_ENG_TYPE_SE = 2,
- CPT_ENG_TYPE_IE = 3,
- CPT_MAX_ENG_TYPES,
-};
-
-/* CPT HW capabilities */
-union cpt_eng_caps {
- uint64_t __otx2_io u;
- struct {
- uint64_t __otx2_io reserved_0_4:5;
- uint64_t __otx2_io mul:1;
- uint64_t __otx2_io sha1_sha2:1;
- uint64_t __otx2_io chacha20:1;
- uint64_t __otx2_io zuc_snow3g:1;
- uint64_t __otx2_io sha3:1;
- uint64_t __otx2_io aes:1;
- uint64_t __otx2_io kasumi:1;
- uint64_t __otx2_io des:1;
- uint64_t __otx2_io crc:1;
- uint64_t __otx2_io reserved_14_63:50;
- };
-};
-
-struct cpt_caps_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cpt_pf_drv_version;
- uint8_t __otx2_io cpt_revision;
- union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES];
-};
-
-/* NPC mbox message structs */
-
-#define NPC_MCAM_ENTRY_INVALID 0xFFFF
-#define NPC_MCAM_INVALID_MAP 0xFFFF
-
-/* NPC mailbox error codes
- * Range 701 - 800.
- */
-enum npc_af_status {
- NPC_MCAM_INVALID_REQ = -701,
- NPC_MCAM_ALLOC_DENIED = -702,
- NPC_MCAM_ALLOC_FAILED = -703,
- NPC_MCAM_PERM_DENIED = -704,
- NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705,
-};
-
-struct npc_mcam_alloc_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MAX_NONCONTIG_ENTRIES 256
- uint8_t __otx2_io contig; /* Contiguous entries ? */
-#define NPC_MCAM_ANY_PRIO 0
-#define NPC_MCAM_LOWER_PRIO 1
-#define NPC_MCAM_HIGHER_PRIO 2
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint16_t __otx2_io ref_entry;
- uint16_t __otx2_io count; /* Number of entries requested */
-};
-
-struct npc_mcam_alloc_entry_rsp {
- struct mbox_msghdr hdr;
- /* Entry alloc'ed or start index if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io entry;
- uint16_t __otx2_io count; /* Number of entries allocated */
- uint16_t __otx2_io free_count; /* Number of entries available */
- uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES];
-};
-
-struct npc_mcam_free_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry; /* Entry index to be freed */
- uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */
-};
-
-struct mcam_entry {
-#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */
- uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io action;
- uint64_t __otx2_io vtag_action;
-};
-
-struct npc_mcam_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io entry; /* MCAM entry to write this match key */
- uint16_t __otx2_io cntr; /* Counter for this MCAM entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */
-};
-
-/* Enable/Disable a given entry */
-struct npc_mcam_ena_dis_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_shift_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MCAM_MAX_SHIFTS 64
- uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io shift_count; /* Number of entries to shift */
-};
-
-struct npc_mcam_shift_entry_rsp {
- struct mbox_msghdr hdr;
- /* Index in 'curr_entry', not entry itself */
- uint16_t __otx2_io failed_entry_idx;
-};
-
-struct npc_mcam_alloc_counter_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io contig; /* Contiguous counters ? */
-#define NPC_MAX_NONCONTIG_COUNTERS 64
- uint16_t __otx2_io count; /* Number of counters requested */
-};
-
-struct npc_mcam_alloc_counter_rsp {
- struct mbox_msghdr hdr;
- /* Counter alloc'ed or start idx if contiguous.
- * Invalid incase of non-contiguous.
- */
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io count; /* Number of counters allocated */
- uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
-};
-
-struct npc_mcam_oper_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr; /* Free a counter or clear/fetch it's stats */
-};
-
-struct npc_mcam_oper_counter_rsp {
- struct mbox_msghdr hdr;
- /* valid only while fetching counter's stats */
- uint64_t __otx2_io stat;
-};
-
-struct npc_mcam_unmap_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io entry; /* Entry and counter to be unmapped */
- uint8_t __otx2_io all; /* Unmap all entries using this counter ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io ref_entry;
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io cntr;
-};
-
-struct npc_get_kex_cfg_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */
- uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */
-#define NPC_MAX_INTF 2
-#define NPC_MAX_LID 8
-#define NPC_MAX_LT 16
-#define NPC_MAX_LD 2
-#define NPC_MAX_LFL 16
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
- uint64_t __otx2_io
- intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
- uint64_t __otx2_io
- intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-#define MKEX_NAME_LEN 128
- uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN];
-};
-
-enum header_fields {
- NPC_DMAC,
- NPC_SMAC,
- NPC_ETYPE,
- NPC_OUTER_VID,
- NPC_TOS,
- NPC_SIP_IPV4,
- NPC_DIP_IPV4,
- NPC_SIP_IPV6,
- NPC_DIP_IPV6,
- NPC_SPORT_TCP,
- NPC_DPORT_TCP,
- NPC_SPORT_UDP,
- NPC_DPORT_UDP,
- NPC_FDSA_VAL,
- NPC_HEADER_FIELDS_MAX,
-};
-
-struct flow_msg {
- unsigned char __otx2_io dmac[6];
- unsigned char __otx2_io smac[6];
- uint16_t __otx2_io etype;
- uint16_t __otx2_io vlan_etype;
- uint16_t __otx2_io vlan_tci;
- union {
- uint32_t __otx2_io ip4src;
- uint32_t __otx2_io ip6src[4];
- };
- union {
- uint32_t __otx2_io ip4dst;
- uint32_t __otx2_io ip6dst[4];
- };
- uint8_t __otx2_io tos;
- uint8_t __otx2_io ip_ver;
- uint8_t __otx2_io ip_proto;
- uint8_t __otx2_io tc;
- uint16_t __otx2_io sport;
- uint16_t __otx2_io dport;
-};
-
-struct npc_install_flow_req {
- struct mbox_msghdr hdr;
- struct flow_msg packet;
- struct flow_msg mask;
- uint64_t __otx2_io features;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io channel;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io set_cntr;
- uint8_t __otx2_io default_rule;
- /* Overwrite(0) or append(1) flow to default rule? */
- uint8_t __otx2_io append;
- uint16_t __otx2_io vf;
- /* action */
- uint32_t __otx2_io index;
- uint16_t __otx2_io match_id;
- uint8_t __otx2_io flow_key_alg;
- uint8_t __otx2_io op;
- /* vtag action */
- uint8_t __otx2_io vtag0_type;
- uint8_t __otx2_io vtag0_valid;
- uint8_t __otx2_io vtag1_type;
- uint8_t __otx2_io vtag1_valid;
-
- /* vtag tx action */
- uint16_t __otx2_io vtag0_def;
- uint8_t __otx2_io vtag0_op;
- uint16_t __otx2_io vtag1_def;
- uint8_t __otx2_io vtag1_op;
-};
-
-struct npc_install_flow_rsp {
- struct mbox_msghdr hdr;
- /* Negative if no counter else counter number */
- int __otx2_io counter;
-};
-
-struct npc_delete_flow_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io start;/*Disable range of entries */
- uint16_t __otx2_io end;
- uint8_t __otx2_io all; /* PF + VFs */
-};
-
-struct npc_mcam_read_entry_req {
- struct mbox_msghdr hdr;
- /* MCAM entry to read */
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_read_entry_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io enable;
-};
-
-struct npc_mcam_read_base_rule_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
-};
-
-/* TIM mailbox error codes
- * Range 801 - 900.
- */
-enum tim_af_status {
- TIM_AF_NO_RINGS_LEFT = -801,
- TIM_AF_INVALID_NPA_PF_FUNC = -802,
- TIM_AF_INVALID_SSO_PF_FUNC = -803,
- TIM_AF_RING_STILL_RUNNING = -804,
- TIM_AF_LF_INVALID = -805,
- TIM_AF_CSIZE_NOT_ALIGNED = -806,
- TIM_AF_CSIZE_TOO_SMALL = -807,
- TIM_AF_CSIZE_TOO_BIG = -808,
- TIM_AF_INTERVAL_TOO_SMALL = -809,
- TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810,
- TIM_AF_INVALID_CLOCK_SOURCE = -811,
- TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812,
- TIM_AF_INVALID_BSIZE = -813,
- TIM_AF_INVALID_ENABLE_PERIODIC = -814,
- TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
- TIM_AF_RING_ALREADY_DISABLED = -817,
-};
-
-enum tim_clk_srcs {
- TIM_CLK_SRCS_TENNS = 0,
- TIM_CLK_SRCS_GPIO = 1,
- TIM_CLK_SRCS_GTI = 2,
- TIM_CLK_SRCS_PTP = 3,
- TIM_CLK_SRSC_INVALID,
-};
-
-enum tim_gpio_edge {
- TIM_GPIO_NO_EDGE = 0,
- TIM_GPIO_LTOH_TRANS = 1,
- TIM_GPIO_HTOL_TRANS = 2,
- TIM_GPIO_BOTH_TRANS = 3,
- TIM_GPIO_INVALID,
-};
-
-enum ptp_op {
- PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */
- PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */
-};
-
-struct ptp_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io op;
- int64_t __otx2_io scaled_ppm;
- uint8_t __otx2_io is_pmu;
-};
-
-struct ptp_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io clk;
- uint64_t __otx2_io tsc;
-};
-
-struct get_hw_cap_rsp {
- struct mbox_msghdr hdr;
- /* Schq mapping fixed or flexible */
- uint8_t __otx2_io nix_fixed_txschq_mapping;
- uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */
-};
-
-struct ndc_sync_op {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io nix_lf_tx_sync;
- uint8_t __otx2_io nix_lf_rx_sync;
- uint8_t __otx2_io npa_lf_sync;
-};
-
-struct tim_lf_alloc_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io sso_pf_func;
-};
-
-struct tim_ring_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
-};
-
-struct tim_config_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint8_t __otx2_io bigendian;
- uint8_t __otx2_io clocksource;
- uint8_t __otx2_io enableperiodic;
- uint8_t __otx2_io enabledontfreebuffer;
- uint32_t __otx2_io bucketsize;
- uint32_t __otx2_io chunksize;
- uint32_t __otx2_io interval;
-};
-
-struct tim_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io tenns_clk;
-};
-
-struct tim_enable_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io timestarted;
- uint32_t __otx2_io currentbucket;
-};
-
-/* REE mailbox error codes
- * Range 1001 - 1100.
- */
-enum ree_af_status {
- REE_AF_ERR_RULE_UNKNOWN_VALUE = -1001,
- REE_AF_ERR_LF_NO_MORE_RESOURCES = -1002,
- REE_AF_ERR_LF_INVALID = -1003,
- REE_AF_ERR_ACCESS_DENIED = -1004,
- REE_AF_ERR_RULE_DB_PARTIAL = -1005,
- REE_AF_ERR_RULE_DB_EQ_BAD_VALUE = -1006,
- REE_AF_ERR_RULE_DB_BLOCK_ALLOC_FAILED = -1007,
- REE_AF_ERR_BLOCK_NOT_IMPLEMENTED = -1008,
- REE_AF_ERR_RULE_DB_INC_OFFSET_TOO_BIG = -1009,
- REE_AF_ERR_RULE_DB_OFFSET_TOO_BIG = -1010,
- REE_AF_ERR_Q_IS_GRACEFUL_DIS = -1011,
- REE_AF_ERR_Q_NOT_GRACEFUL_DIS = -1012,
- REE_AF_ERR_RULE_DB_ALLOC_FAILED = -1013,
- REE_AF_ERR_RULE_DB_TOO_BIG = -1014,
- REE_AF_ERR_RULE_DB_GEQ_BAD_VALUE = -1015,
- REE_AF_ERR_RULE_DB_LEQ_BAD_VALUE = -1016,
- REE_AF_ERR_RULE_DB_WRONG_LENGTH = -1017,
- REE_AF_ERR_RULE_DB_WRONG_OFFSET = -1018,
- REE_AF_ERR_RULE_DB_BLOCK_TOO_BIG = -1019,
- REE_AF_ERR_RULE_DB_SHOULD_FILL_REQUEST = -1020,
- REE_AF_ERR_RULE_DBI_ALLOC_FAILED = -1021,
- REE_AF_ERR_LF_WRONG_PRIORITY = -1022,
- REE_AF_ERR_LF_SIZE_TOO_BIG = -1023,
-};
-
-/* REE mbox message formats */
-
-struct ree_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
-};
-
-struct ree_lf_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io size;
- uint8_t __otx2_io lf;
- uint8_t __otx2_io pri;
-};
-
-struct ree_rule_db_prog_req_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_REQ_BLOCK_SIZE (MBOX_SIZE >> 1)
- uint8_t __otx2_io rule_db[REE_RULE_DB_REQ_BLOCK_SIZE];
- uint32_t __otx2_io blkaddr; /* REE0 or REE1 */
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
- uint8_t __otx2_io is_incremental; /* is incremental flow */
- uint8_t __otx2_io is_dbi; /* is rule db incremental */
-};
-
-struct ree_rule_db_get_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io offset; /* retrieve db from this offset */
- uint8_t __otx2_io is_dbi; /* is request for rule db incremental */
-};
-
-struct ree_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint32_t __otx2_io blkaddr;
- uint8_t __otx2_io is_write;
-};
-
-struct ree_rule_db_len_rsp_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io len;
- uint32_t __otx2_io inc_len;
-};
-
-struct ree_rule_db_get_rsp_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_RSP_BLOCK_SIZE (MBOX_DOWN_TX_SIZE - SZ_1K)
- uint8_t __otx2_io rule_db[REE_RULE_DB_RSP_BLOCK_SIZE];
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
-};
-
-__rte_internal
-const char *otx2_mbox_id2name(uint16_t id);
-int otx2_mbox_id2size(uint16_t id);
-void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevsi, uint64_t intr_offset);
-void otx2_mbox_fini(struct otx2_mbox *mbox);
-__rte_internal
-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
-__rte_internal
-int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo);
-__rte_internal
-int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
-__rte_internal
-int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
-__rte_internal
-struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
- int size, int size_rsp);
-
-static inline struct mbox_msghdr *
-otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size)
-{
- return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
-}
-
-static inline void
-otx2_mbox_req_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_REQ_SIG;
- hdr->ver = OTX2_MBOX_VERSION;
- hdr->id = mbox_id;
- hdr->pcifunc = 0;
-}
-
-static inline void
-otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_RSP_SIG;
- hdr->rc = -ETIMEDOUT;
- hdr->id = mbox_id;
-}
-
-static inline bool
-otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- bool ret;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- ret = mdev->num_msgs != 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return ret;
-}
-
-static inline int
-otx2_mbox_process(struct otx2_mbox *mbox)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, NULL);
-}
-
-static inline int
-otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, msg);
-}
-
-static inline int
-otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo);
-}
-
-static inline int
-otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, msg, tmo);
-}
-
-int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */);
-int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func,
- uint16_t id);
-
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
-static inline struct _req_type \
-*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \
-{ \
- struct _req_type *req; \
- \
- req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
- mbox, 0, sizeof(struct _req_type), \
- sizeof(struct _rsp_type)); \
- if (!req) \
- return NULL; \
- \
- req->hdr.sig = OTX2_MBOX_REQ_SIG; \
- req->hdr.id = _id; \
- otx2_mbox_dbg("id=0x%x (%s)", \
- req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \
- return req; \
-}
-
-MBOX_MESSAGES
-#undef M
-
-/* This is required for copy operations from device memory which do not work on
- * addresses which are unaligned to 16B. This is because of specific
- * optimizations to libc memcpy.
- */
-static inline volatile void *
-otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l)
-{
- const volatile uint8_t *sb;
- volatile uint8_t *db;
- size_t i;
-
- if (!d || !s)
- return NULL;
- db = (volatile uint8_t *)d;
- sb = (const volatile uint8_t *)s;
- for (i = 0; i < l; i++)
- db[i] = sb[i];
- return d;
-}
-
-/* This is required for memory operations from device memory which do not
- * work on addresses which are unaligned to 16B. This is because of specific
- * optimizations to libc memset.
- */
-static inline void
-otx2_mbox_memset(volatile void *d, uint8_t val, size_t l)
-{
- volatile uint8_t *db;
- size_t i = 0;
-
- if (!d || !l)
- return;
- db = (volatile uint8_t *)d;
- for (i = 0; i < l; i++)
- db[i] = val;
-}
-
-#endif /* __OTX2_MBOX_H__ */
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
deleted file mode 100644
index b561b67174..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <ethdev_driver.h>
-#include <rte_spinlock.h>
-
-#include "otx2_common.h"
-#include "otx2_sec_idev.h"
-
-static struct otx2_sec_idev_cfg sec_cfg[OTX2_MAX_INLINE_PORTS];
-
-/**
- * @internal
- * Check if rte_eth_dev is security offload capable otx2_eth_dev
- */
-uint8_t
-otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_VF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- return 1;
-
- return 0;
-}
-
-int
-otx2_sec_idev_cfg_init(int port_id)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i;
-
- cfg = &sec_cfg[port_id];
- cfg->tx_cpt_idx = 0;
- rte_spinlock_init(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- cfg->tx_cpt[i].qp = NULL;
- rte_atomic16_set(&cfg->tx_cpt[i].ref_cnt, 0);
- }
-
- return 0;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i, ret;
-
- if (qp == NULL || port_id >= OTX2_MAX_INLINE_PORTS)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- /* Find a free slot to save CPT LF */
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == NULL) {
- cfg->tx_cpt[i].qp = qp;
- ret = 0;
- goto unlock;
- }
- }
-
- ret = -EINVAL;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i, ret;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp != qp)
- continue;
-
- /* Don't free if the QP is in use by any sec session */
- if (rte_atomic16_read(&cfg->tx_cpt[i].ref_cnt)) {
- ret = -EBUSY;
- } else {
- cfg->tx_cpt[i].qp = NULL;
- ret = 0;
- }
-
- goto unlock;
- }
-
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- }
-
- return -ENOENT;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t index;
- int i, ret;
-
- if (port_id >= OTX2_MAX_INLINE_PORTS || qp == NULL)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- index = cfg->tx_cpt_idx;
-
- /* Get the next index with valid data */
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[index].qp != NULL)
- break;
- index = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
- }
-
- if (i >= OTX2_MAX_CPT_QP_PER_PORT) {
- ret = -EINVAL;
- goto unlock;
- }
-
- *qp = cfg->tx_cpt[index].qp;
- rte_atomic16_inc(&cfg->tx_cpt[index].ref_cnt);
-
- cfg->tx_cpt_idx = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
-
- ret = 0;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == qp) {
- rte_atomic16_dec(&cfg->tx_cpt[i].ref_cnt);
- return 0;
- }
- }
- }
-
- return -EINVAL;
-}
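The `otx2_sec_idev_tx_cpt_qp_get` function removed above distributes work across CPT queue pairs with a round-robin cursor over a sparse slot array, taking a reference on the chosen slot. A minimal lock-free sketch of that selection logic under simplifying assumptions (single-threaded, plain counters instead of `rte_atomic16_t`; all names hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

#define NB_SLOTS 4

/* Sparse slot pool with a round-robin cursor: pool_get() scans from the
 * saved index, skips empty slots, bumps the chosen slot's reference
 * count, and advances the cursor so successive calls spread load
 * across the occupied slots. */
struct slot_pool {
	void *slot[NB_SLOTS];
	unsigned int ref_cnt[NB_SLOTS];
	uint16_t idx;		/* next search start */
};

static int
pool_get(struct slot_pool *p, void **out)
{
	uint16_t index = p->idx;
	int i;

	/* Find the next occupied slot, wrapping around at most once */
	for (i = 0; i < NB_SLOTS; i++) {
		if (p->slot[index] != NULL)
			break;
		index = (index + 1) % NB_SLOTS;
	}
	if (i >= NB_SLOTS)
		return -1;	/* pool is empty */

	*out = p->slot[index];
	p->ref_cnt[index]++;
	p->idx = (index + 1) % NB_SLOTS;
	return 0;
}
```

In the driver the same scan runs under `tx_cpt_lock`, and the per-slot reference count is what lets `qp_remove` refuse to free a queue pair still in use by a security session (`-EBUSY`).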
diff --git a/drivers/common/octeontx2/otx2_sec_idev.h b/drivers/common/octeontx2/otx2_sec_idev.h
deleted file mode 100644
index 89cdaf66ab..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_SEC_IDEV_H_
-#define _OTX2_SEC_IDEV_H_
-
-#include <rte_ethdev.h>
-
-#define OTX2_MAX_CPT_QP_PER_PORT 64
-#define OTX2_MAX_INLINE_PORTS 64
-
-struct otx2_cpt_qp;
-
-struct otx2_sec_idev_cfg {
- struct {
- struct otx2_cpt_qp *qp;
- rte_atomic16_t ref_cnt;
- } tx_cpt[OTX2_MAX_CPT_QP_PER_PORT];
-
- uint16_t tx_cpt_idx;
- rte_spinlock_t tx_cpt_lock;
-};
-
-__rte_internal
-uint8_t otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev);
-
-__rte_internal
-int otx2_sec_idev_cfg_init(int port_id);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp);
-
-#endif /* _OTX2_SEC_IDEV_H_ */
diff --git a/drivers/common/octeontx2/version.map b/drivers/common/octeontx2/version.map
deleted file mode 100644
index b58f19ce32..0000000000
--- a/drivers/common/octeontx2/version.map
+++ /dev/null
@@ -1,44 +0,0 @@
-INTERNAL {
- global:
-
- otx2_dev_active_vfs;
- otx2_dev_fini;
- otx2_dev_priv_init;
- otx2_disable_irqs;
- otx2_eth_dev_is_sec_capable;
- otx2_intra_dev_get_cfg;
- otx2_logtype_base;
- otx2_logtype_dpi;
- otx2_logtype_ep;
- otx2_logtype_mbox;
- otx2_logtype_nix;
- otx2_logtype_npa;
- otx2_logtype_npc;
- otx2_logtype_ree;
- otx2_logtype_sso;
- otx2_logtype_tim;
- otx2_logtype_tm;
- otx2_mbox_alloc_msg_rsp;
- otx2_mbox_get_rsp;
- otx2_mbox_get_rsp_tmo;
- otx2_mbox_id2name;
- otx2_mbox_msg_send;
- otx2_mbox_wait_for_rsp;
- otx2_npa_lf_active;
- otx2_npa_lf_obj_get;
- otx2_npa_lf_obj_ref;
- otx2_npa_pf_func_get;
- otx2_npa_set_defaults;
- otx2_parse_common_devargs;
- otx2_register_irq;
- otx2_sec_idev_cfg_init;
- otx2_sec_idev_tx_cpt_qp_add;
- otx2_sec_idev_tx_cpt_qp_get;
- otx2_sec_idev_tx_cpt_qp_put;
- otx2_sec_idev_tx_cpt_qp_remove;
- otx2_sso_pf_func_get;
- otx2_sso_pf_func_set;
- otx2_unregister_irq;
-
- local: *;
-};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 59f02ea47c..147b8cf633 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -16,7 +16,6 @@ drivers = [
'nitrox',
'null',
'octeontx',
- 'octeontx2',
'openssl',
'scheduler',
'virtio',
diff --git a/drivers/crypto/octeontx2/meson.build b/drivers/crypto/octeontx2/meson.build
deleted file mode 100644
index 3b387cc570..0000000000
--- a/drivers/crypto/octeontx2/meson.build
+++ /dev/null
@@ -1,30 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (C) 2019 Marvell International Ltd.
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-deps += ['bus_pci']
-deps += ['common_cpt']
-deps += ['common_octeontx2']
-deps += ['ethdev']
-deps += ['eventdev']
-deps += ['security']
-
-sources = files(
- 'otx2_cryptodev.c',
- 'otx2_cryptodev_capabilities.c',
- 'otx2_cryptodev_hw_access.c',
- 'otx2_cryptodev_mbox.c',
- 'otx2_cryptodev_ops.c',
- 'otx2_cryptodev_sec.c',
-)
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../common/octeontx2')
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../mempool/octeontx2')
-includes += include_directories('../../net/octeontx2')
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
deleted file mode 100644
index fc7ad05366..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ /dev/null
@@ -1,188 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_crypto.h>
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_dev.h>
-#include <rte_errno.h>
-#include <rte_mempool.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_dev.h"
-
-/* CPT common headers */
-#include "cpt_common.h"
-#include "cpt_pmd_logs.h"
-
-uint8_t otx2_cryptodev_driver_id;
-
-static struct rte_pci_id pci_id_cpt_table[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_CPT_VF)
- },
- /* sentinel */
- {
- .device_id = 0
- },
-};
-
-uint64_t
-otx2_cpt_default_ff_get(void)
-{
- return RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_IN_PLACE_SGL |
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
- RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_SECURITY |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-}
-
-static int
-otx2_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- struct rte_cryptodev_pmd_init_params init_params = {
- .name = "",
- .socket_id = rte_socket_id(),
- .private_data_size = sizeof(struct otx2_cpt_vf)
- };
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
- struct otx2_dev *otx2_dev;
- struct otx2_cpt_vf *vf;
- uint16_t nb_queues;
- int ret;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
- if (dev == NULL) {
- ret = -ENODEV;
- goto exit;
- }
-
- dev->dev_ops = &otx2_cpt_ops;
-
- dev->driver_id = otx2_cryptodev_driver_id;
-
- /* Get private data space allocated */
- vf = dev->data->dev_private;
-
- otx2_dev = &vf->otx2_dev;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
- if (ret) {
- CPT_LOG_ERR("Could not initialize otx2_dev");
- goto pmd_destroy;
- }
-
- /* Get number of queues available on the device */
- ret = otx2_cpt_available_queues_get(dev, &nb_queues);
- if (ret) {
- CPT_LOG_ERR("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_CPT_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- CPT_LOG_ERR("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- CPT_LOG_INFO("Max queues supported by device: %d",
- vf->max_queues);
-
- ret = otx2_cpt_hardware_caps_get(dev, vf->hw_caps);
- if (ret) {
- CPT_LOG_ERR("Could not determine hardware capabilities");
- goto otx2_dev_fini;
- }
- }
-
- otx2_crypto_capabilities_init(vf->hw_caps);
- otx2_crypto_sec_capabilities_init(vf->hw_caps);
-
- /* Create security ctx */
- ret = otx2_crypto_sec_ctx_create(dev);
- if (ret)
- goto otx2_dev_fini;
-
- dev->feature_flags = otx2_cpt_default_ff_get();
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- otx2_cpt_set_enqdeq_fns(dev);
-
- rte_cryptodev_pmd_probing_finish(dev);
-
- return 0;
-
-otx2_dev_fini:
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- otx2_dev_fini(pci_dev, otx2_dev);
-pmd_destroy:
- rte_cryptodev_pmd_destroy(dev);
-exit:
- CPT_LOG_ERR("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
- pci_dev->id.vendor_id, pci_dev->id.device_id);
- return ret;
-}
-
-static int
-otx2_cpt_pci_remove(struct rte_pci_device *pci_dev)
-{
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
-
- if (pci_dev == NULL)
- return -EINVAL;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_get_named_dev(name);
- if (dev == NULL)
- return -ENODEV;
-
- /* Destroy security ctx */
- otx2_crypto_sec_ctx_destroy(dev);
-
- return rte_cryptodev_pmd_destroy(dev);
-}
-
-static struct rte_pci_driver otx2_cryptodev_pmd = {
- .id_table = pci_id_cpt_table,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_cpt_pci_probe,
- .remove = otx2_cpt_pci_remove,
-};
-
-static struct cryptodev_driver otx2_cryptodev_drv;
-
-RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_OCTEONTX2_PMD, otx2_cryptodev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_OCTEONTX2_PMD, pci_id_cpt_table);
-RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_OCTEONTX2_PMD, "vfio-pci");
-RTE_PMD_REGISTER_CRYPTO_DRIVER(otx2_cryptodev_drv, otx2_cryptodev_pmd.driver,
- otx2_cryptodev_driver_id);
-RTE_LOG_REGISTER_DEFAULT(otx2_cpt_logtype, NOTICE);
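The `otx2_cpt_pci_probe` error path removed above uses the kernel-style goto-unwinding idiom: each acquisition failure jumps to a label that releases only what was acquired so far, so cleanup mirrors setup in reverse order. A minimal standalone sketch of the idiom (failure injection via flags; all names hypothetical):

```c
#include <stdlib.h>

static int cleanup_count; /* counts partial-unwind executions, for illustration */

/* Acquire two resources in order; on failure, fall through labels that
 * release exactly the resources acquired before the failing step. */
static int
setup(int fail_a, int fail_b)
{
	void *a, *b;
	int ret;

	if (fail_a) {
		ret = -1;
		goto exit;	/* nothing acquired yet: nothing to free */
	}
	a = malloc(16);

	if (fail_b) {
		ret = -2;
		goto free_a;	/* only 'a' was acquired */
	}
	b = malloc(16);

	/* success path: caller owns both; freed here to keep sketch leak-free */
	free(b);
	free(a);
	return 0;

free_a:
	free(a);
	cleanup_count++;
exit:
	return ret;
}
```

In the probe function the labels are `otx2_dev_fini`, `pmd_destroy`, and `exit`, tearing down the base device and the cryptodev shell respectively.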
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.h b/drivers/crypto/octeontx2/otx2_cryptodev.h
deleted file mode 100644
index 15ecfe45b6..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_H_
-#define _OTX2_CRYPTODEV_H_
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-
-#include "otx2_dev.h"
-
-/* Marvell OCTEON TX2 Crypto PMD device name */
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
-
-#define OTX2_CPT_MAX_LFS 128
-#define OTX2_CPT_MAX_QUEUES_PER_VF 64
-#define OTX2_CPT_MAX_BLKS 2
-#define OTX2_CPT_PMD_VERSION 3
-#define OTX2_CPT_REVISION_ID_3 3
-
-/**
- * Device private data
- */
-struct otx2_cpt_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of crypto queues attached */
- uint16_t lf_msixoff[OTX2_CPT_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t lf_blkaddr[OTX2_CPT_MAX_LFS];
- /**< CPT0/1 BLKADDR of LFs */
- uint8_t cpt_revision;
- /**< CPT revision */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
- union cpt_eng_caps hw_caps[CPT_MAX_ENG_TYPES];
- /**< CPT device capabilities */
-};
-
-struct cpt_meta_info {
- uint64_t deq_op_info[5];
- uint64_t comp_code_sz;
- union cpt_res_s cpt_res __rte_aligned(16);
- struct cpt_request_info cpt_req;
-};
-
-#define CPT_LOGTYPE otx2_cpt_logtype
-
-extern int otx2_cpt_logtype;
-
-/*
- * Crypto device driver ID
- */
-extern uint8_t otx2_cryptodev_driver_id;
-
-uint64_t otx2_cpt_default_ff_get(void);
-void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
deleted file mode 100644
index ba3fbbbe22..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
+++ /dev/null
@@ -1,924 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_mbox.h"
-
-#define CPT_EGRP_GET(hw_caps, name, egrp) do { \
- if ((hw_caps[CPT_ENG_TYPE_SE].name) && \
- (hw_caps[CPT_ENG_TYPE_IE].name)) \
- *egrp = OTX2_CPT_EGRP_SE_IE; \
- else if (hw_caps[CPT_ENG_TYPE_SE].name) \
- *egrp = OTX2_CPT_EGRP_SE; \
- else if (hw_caps[CPT_ENG_TYPE_AE].name) \
- *egrp = OTX2_CPT_EGRP_AE; \
- else \
- *egrp = OTX2_CPT_EGRP_MAX; \
-} while (0)
-
-#define CPT_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- cpt_caps_add(caps_##name, RTE_DIM(caps_##name)); \
-} while (0)
-
-#define SEC_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- sec_caps_add(sec_caps_##name, RTE_DIM(sec_caps_##name));\
-} while (0)
-
-#define OTX2_CPT_MAX_CAPS 34
-#define OTX2_SEC_MAX_CAPS 4
-
-static struct rte_cryptodev_capabilities otx2_cpt_caps[OTX2_CPT_MAX_CAPS];
-static struct rte_cryptodev_capabilities otx2_cpt_sec_caps[OTX2_SEC_MAX_CAPS];
-
-static const struct rte_cryptodev_capabilities caps_mul[] = {
- { /* RSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
- (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
- (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* MOD_EXP */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
- .op_types = 0,
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* ECDSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY)),
- }
- },
- }
- },
- { /* ECPM */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM,
- .op_types = 0
- }
- },
- }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_sha1_sha2[] = {
- { /* SHA1 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 20,
- .max = 20,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA224 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA224 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
- { /* SHA384 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 48,
- .max = 48,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA384 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 24,
- .max = 48,
- .increment = 24
- },
- }, }
- }, }
- },
- { /* SHA512 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512,
- .block_size = 128,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 64,
- .max = 64,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA512 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
- .block_size = 128,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 32,
- .max = 64,
- .increment = 32
- },
- }, }
- }, }
- },
- { /* MD5 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* MD5 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 8,
- .max = 64,
- .increment = 8
- },
- .digest_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_chacha20[] = {
- { /* Chacha20-Poly1305 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
- .block_size = 64,
- .key_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- }
-};
-
-static const struct rte_cryptodev_capabilities caps_zuc_snow3g[] = {
- { /* SNOW 3G (UEA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EEA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SNOW 3G (UIA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EIA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_aes[] = {
- { /* AES GMAC (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_AES_GMAC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 8,
- .max = 16,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CTR */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CTR,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- }
- }, }
- }, }
- },
- { /* AES XTS */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_XTS,
- .block_size = 16,
- .key_size = {
- .min = 32,
- .max = 64,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 4,
- .max = 16,
- .increment = 1
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_kasumi[] = {
- { /* KASUMI (F8) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* KASUMI (F9) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_KASUMI_F9,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_des[] = {
- { /* 3DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 16,
- .increment = 8
- }
- }, }
- }, }
- },
- { /* 3DES ECB */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_null[] = {
- { /* NULL (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- }, },
- }, },
- },
- { /* NULL (CIPHER) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, },
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_end[] = {
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_security_capability
-otx2_crypto_sec_capabilities[] = {
- { /* IPsec Lookaside Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Lookaside Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-cpt_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_CPT_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- CPT_CAPS_ADD(hw_caps, mul);
- CPT_CAPS_ADD(hw_caps, sha1_sha2);
- CPT_CAPS_ADD(hw_caps, chacha20);
- CPT_CAPS_ADD(hw_caps, zuc_snow3g);
- CPT_CAPS_ADD(hw_caps, aes);
- CPT_CAPS_ADD(hw_caps, kasumi);
- CPT_CAPS_ADD(hw_caps, des);
-
- cpt_caps_add(caps_null, RTE_DIM(caps_null));
- cpt_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void)
-{
- return otx2_cpt_caps;
-}
-
-static void
-sec_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_SEC_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_sec_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- SEC_CAPS_ADD(hw_caps, aes);
- SEC_CAPS_ADD(hw_caps, sha1_sha2);
-
- sec_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_crypto_sec_capabilities;
-}
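The `cpt_caps_add`/`sec_caps_add` helpers removed above build the advertised capability list by appending fixed-size slices into a static array, using a function-local `static` cursor and an overflow guard. A minimal standalone sketch of that append pattern (struct and sizes are hypothetical; unlike the driver, which silently drops overflow, this version reports it):

```c
#include <string.h>

#define MAX_CAPS 8

struct cap { int id; };

static struct cap cap_table[MAX_CAPS];

/* Append a slice of capabilities to the global table. The static
 * cursor persists across calls, so successive calls concatenate
 * slices; an append that would overflow is rejected. Returns the
 * new fill level, or -1 on overflow. */
static int
caps_add(const struct cap *caps, int nb_caps)
{
	static int cur_pos;

	if (cur_pos + nb_caps > MAX_CAPS)
		return -1;
	memcpy(&cap_table[cur_pos], caps, nb_caps * sizeof(caps[0]));
	cur_pos += nb_caps;
	return cur_pos;
}
```

The driver composes the final table the same way, gated per algorithm family by the `CPT_CAPS_ADD` hardware-capability check, and terminates it with the `caps_end` sentinel entry so readers can scan without a length field.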
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
deleted file mode 100644
index c1e0001190..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_CAPABILITIES_H_
-#define _OTX2_CRYPTODEV_CAPABILITIES_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_mbox.h"
-
-enum otx2_cpt_egrp {
- OTX2_CPT_EGRP_SE = 0,
- OTX2_CPT_EGRP_SE_IE = 1,
- OTX2_CPT_EGRP_AE = 2,
- OTX2_CPT_EGRP_MAX,
-};
-
-/*
- * Initialize crypto capabilities for the device
- *
- */
-void otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get capabilities list for the device
- *
- */
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void);
-
-/*
- * Initialize security capabilities for the device
- *
- */
-void otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get security capabilities list for the device
- *
- */
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused);
-
-#endif /* _OTX2_CRYPTODEV_CAPABILITIES_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
deleted file mode 100644
index d5d6b5bad7..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ /dev/null
@@ -1,225 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <rte_cryptodev.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_dev.h"
-
-#include "cpt_pmd_logs.h"
-
-static void
-otx2_cpt_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_CPT_LF_MISC_INT);
- if (intr == 0)
- return;
-
- CPT_LOG_ERR("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_CPT_LF_MISC_INT);
-}
-
-static void
-otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, otx2_cpt_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, otx2_cpt_lf_err_intr_handler,
- (void *)base, msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_cpt_err_intr_register(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- CPT_LOG_ERR("Invalid CPT LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- ret = otx2_cpt_lf_err_intr_register(dev, vf->lf_msixoff[i],
- base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[j], j);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
-
- /*
- * Failed to register error interrupt. Not returning error as this would
- * prevent application from enabling larger number of devs.
- *
- * This failure is a known issue because otx2_dev_init() initializes
- * interrupts based on static values from ATF, and the actual number
- * of interrupts needed (which is based on LFs) can be determined only
- * after otx2_dev_init() sets up interrupts which includes mbox
- * interrupts.
- */
- return 0;
-}
-
-int
-otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask, uint8_t pri,
- uint32_t size_div40)
-{
- union otx2_cpt_af_lf_ctl af_lf_ctl;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_q_base base;
- union otx2_cpt_lf_q_size size;
- union otx2_cpt_lf_ctl lf_ctl;
- int ret;
-
- /* Set engine group mask and priority */
-
- ret = otx2_cpt_af_reg_read(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, &af_lf_ctl.u);
- if (ret)
- return ret;
- af_lf_ctl.s.grp = grp_mask;
- af_lf_ctl.s.pri = pri ? 1 : 0;
- ret = otx2_cpt_af_reg_write(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, af_lf_ctl.u);
- if (ret)
- return ret;
-
- /* Set instruction queue base address */
-
- base.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_BASE);
- base.s.fault = 0;
- base.s.stopped = 0;
- base.s.addr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_CPT_LF_Q_BASE);
-
- /* Set instruction queue size */
-
- size.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_SIZE);
- size.s.size_div40 = size_div40;
- otx2_write64(size.u, qp->base + OTX2_CPT_LF_Q_SIZE);
-
- /* Enable instruction queue */
-
- lf_ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- lf_ctl.s.ena = 1;
- otx2_write64(lf_ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Start instruction execution */
-
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 1;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- return 0;
-}
-
-void
-otx2_cpt_iq_disable(struct otx2_cpt_qp *qp)
-{
- union otx2_cpt_lf_q_grp_ptr grp_ptr;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_ctl ctl;
- int cnt;
-
- /* Stop instruction execution */
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 0x0;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- /* Disable instructions enqueuing */
- ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- ctl.s.ena = 0;
- otx2_write64(ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Wait for instruction queue to become empty */
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if (inprog.s.grb_partial)
- cnt = 0;
- else
- cnt++;
- grp_ptr.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_GRP_PTR);
- } while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
-
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if ((inprog.s.inflight == 0) &&
- (inprog.s.gwb_cnt < 40) &&
- ((inprog.s.grb_cnt == 0) || (inprog.s.grb_cnt == 40)))
- cnt++;
- else
- cnt = 0;
- } while (cnt < 10);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
deleted file mode 100644
index 90a338e05a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_HW_ACCESS_H_
-#define _OTX2_CRYPTODEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include <rte_cryptodev.h>
-#include <rte_memory.h>
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-#include "cpt_mcode_defines.h"
-
-#include "otx2_dev.h"
-#include "otx2_cryptodev_qp.h"
-
-/* CPT instruction queue length.
- * Use queue size as power of 2 for aiding in pending queue calculations.
- */
-#define OTX2_CPT_DEFAULT_CMD_QLEN 8192
-
-/* Mask which selects all engine groups */
-#define OTX2_CPT_ENG_GRPS_MASK 0xFF
-
-/* Register offsets */
-
-/* LMT LF registers */
-#define OTX2_LMT_LF_LMTLINE(a) (0x0ull | (uint64_t)(a) << 3)
-
-/* CPT LF registers */
-#define OTX2_CPT_LF_CTL 0x10ull
-#define OTX2_CPT_LF_INPROG 0x40ull
-#define OTX2_CPT_LF_MISC_INT 0xb0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1S 0xd0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1C 0xe0ull
-#define OTX2_CPT_LF_Q_BASE 0xf0ull
-#define OTX2_CPT_LF_Q_SIZE 0x100ull
-#define OTX2_CPT_LF_Q_GRP_PTR 0x120ull
-#define OTX2_CPT_LF_NQ(a) (0x400ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_AF_LF_CTL(a) (0x27000ull | (uint64_t)(a) << 3)
-#define OTX2_CPT_AF_LF_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_LF_BAR2(vf, blk_addr, q_id) \
- ((vf)->otx2_dev.bar2 + \
- ((blk_addr << 20) | ((q_id) << 12)))
-
-#define OTX2_CPT_QUEUE_HI_PRIO 0x1
-
-union otx2_cpt_lf_ctl {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t reserved_3_3 : 1;
- uint64_t fc_hyst_bits : 4;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_cpt_lf_inprog {
- uint64_t u;
- struct {
- uint64_t inflight : 9;
- uint64_t reserved_9_15 : 7;
- uint64_t eena : 1;
- uint64_t grp_drp : 1;
- uint64_t reserved_18_30 : 13;
- uint64_t grb_partial : 1;
- uint64_t grb_cnt : 8;
- uint64_t gwb_cnt : 8;
- uint64_t reserved_48_63 : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_base {
- uint64_t u;
- struct {
- uint64_t fault : 1;
- uint64_t stopped : 1;
- uint64_t reserved_2_6 : 5;
- uint64_t addr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_cpt_lf_q_size {
- uint64_t u;
- struct {
- uint64_t size_div40 : 15;
- uint64_t reserved_15_63 : 49;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl {
- uint64_t u;
- struct {
- uint64_t pri : 1;
- uint64_t reserved_1_8 : 8;
- uint64_t pf_func_inst : 1;
- uint64_t cont_err : 1;
- uint64_t reserved_11_15 : 5;
- uint64_t nixtx_en : 1;
- uint64_t reserved_17_47 : 31;
- uint64_t grp : 8;
- uint64_t reserved_56_63 : 8;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl2 {
- uint64_t u;
- struct {
- uint64_t exe_no_swap : 1;
- uint64_t exe_ldwb : 1;
- uint64_t reserved_2_31 : 30;
- uint64_t sso_pf_func : 16;
- uint64_t nix_pf_func : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_grp_ptr {
- uint64_t u;
- struct {
- uint64_t dq_ptr : 15;
- uint64_t reserved_31_15 : 17;
- uint64_t nq_ptr : 15;
- uint64_t reserved_47_62 : 16;
- uint64_t xq_xor : 1;
- } s;
-};
-
-/*
- * Enumeration cpt_9x_comp_e
- *
- * CPT 9X Completion Enumeration
- * Enumerates the values of CPT_RES_S[COMPCODE].
- */
-enum cpt_9x_comp_e {
- CPT_9X_COMP_E_NOTDONE = 0x00,
- CPT_9X_COMP_E_GOOD = 0x01,
- CPT_9X_COMP_E_FAULT = 0x02,
- CPT_9X_COMP_E_HWERR = 0x04,
- CPT_9X_COMP_E_INSTERR = 0x05,
- CPT_9X_COMP_E_LAST_ENTRY = 0x06
-};
-
-void otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev);
-
-int otx2_cpt_err_intr_register(const struct rte_cryptodev *dev);
-
-int otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask,
- uint8_t pri, uint32_t size_div40);
-
-void otx2_cpt_iq_disable(struct otx2_cpt_qp *qp);
-
-#endif /* _OTX2_CRYPTODEV_HW_ACCESS_H_ */
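[Editorial note, not part of the patch] The header removed above models each CPT LF register as a union of a raw 64-bit word and named bitfields, so the enable/disable paths can read-modify-write a single field without explicit mask/shift arithmetic. A standalone sketch of that idiom, operating on a plain word instead of MMIO (the real code goes through `otx2_read64()`/`otx2_write64()`); it assumes the usual LSB-first bitfield layout of GCC/Clang on little-endian targets, as the driver itself does:

```c
#include <stdint.h>

/* Same shape as the removed union otx2_cpt_lf_ctl: raw word + fields. */
union lf_ctl {
	uint64_t u;
	struct {
		uint64_t ena            : 1;
		uint64_t fc_ena         : 1;
		uint64_t fc_up_crossing : 1;
		uint64_t reserved_3_3   : 1;
		uint64_t fc_hyst_bits   : 4;
		uint64_t reserved_8_63  : 56;
	} s;
};

/* Read-modify-write of ena, preserving every other field -- the pattern
 * otx2_cpt_iq_enable()/otx2_cpt_iq_disable() apply to the live register. */
static uint64_t lf_ctl_set_ena(uint64_t reg, int enable)
{
	union lf_ctl ctl;

	ctl.u = reg;              /* otx2_read64() in the driver */
	ctl.s.ena = enable ? 1 : 0;
	return ctl.u;             /* otx2_write64() in the driver */
}
```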
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
deleted file mode 100644
index f9e7b0b474..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ /dev/null
@@ -1,285 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <cryptodev_pmd.h>
-#include <rte_ethdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_sec_idev.h"
-#include "otx2_mbox.h"
-
-#include "cpt_pmd_logs.h"
-
-int
-otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct cpt_caps_rsp_msg *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_cpt_caps_get(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (rsp->cpt_pf_drv_version != OTX2_CPT_PMD_VERSION) {
- otx2_err("Incompatible CPT PMD version"
- "(Kernel: 0x%04x DPDK: 0x%04x)",
- rsp->cpt_pf_drv_version, OTX2_CPT_PMD_VERSION);
- return -EPIPE;
- }
-
- vf->cpt_revision = rsp->cpt_revision;
- otx2_mbox_memcpy(hw_caps, rsp->eng_caps,
- sizeof(union cpt_eng_caps) * CPT_MAX_ENG_TYPES);
-
- return 0;
-}
-
-int
-otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct free_rsrcs_rsp *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- *nb_queues = rsp->cpt + rsp->cpt1;
- return 0;
-}
-
-int
-otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int blkaddr[OTX2_CPT_MAX_BLKS];
- struct rsrc_attach_req *req;
- int blknum = 0;
- int i, ret;
-
- blkaddr[0] = RVU_BLOCK_ADDR_CPT0;
- blkaddr[1] = RVU_BLOCK_ADDR_CPT1;
-
- /* Ask AF to attach required LFs */
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- if ((vf->cpt_revision == OTX2_CPT_REVISION_ID_3) &&
- (vf->otx2_dev.pf_func & 0x1))
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
-
- /* 1 LF = 1 queue */
- req->cptlfs = nb_queues;
- req->cpt_blkaddr = blkaddr[blknum];
-
- ret = otx2_mbox_process(mbox);
- if (ret == -ENOSPC) {
- if (vf->cpt_revision == OTX2_CPT_REVISION_ID_3) {
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
- req->cpt_blkaddr = blkaddr[blknum];
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- return -EIO;
- }
- } else if (ret < 0) {
- return -EIO;
- }
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
- for (i = 0; i < nb_queues; i++)
- vf->lf_blkaddr[i] = req->cpt_blkaddr;
-
- return 0;
-}
-
-int
-otx2_cpt_queues_detach(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->cptlfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct msix_offset_rsp *rsp;
- uint32_t i, ret;
-
- /* Get CPT MSI-X vector offsets */
-
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++)
- vf->lf_msixoff[i] = (vf->lf_blkaddr[i] == RVU_BLOCK_ADDR_CPT1) ?
- rsp->cpt1_lf_msixoff[i] : rsp->cptlf_msixoff[i];
-
- return 0;
-}
-
-static int
-otx2_cpt_send_mbox_msg(struct otx2_cpt_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- CPT_LOG_ERR("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct cpt_rd_wr_reg_msg *msg;
- int ret, off;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = blkaddr;
-
- ret = otx2_cpt_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct cpt_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rd_wr_reg_msg *msg;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = blkaddr;
-
- return otx2_cpt_send_mbox_msg(vf);
-}
-
-int
-otx2_cpt_inline_init(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rx_inline_lf_cfg_msg *msg;
- int ret;
-
- msg = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(mbox);
- msg->sso_pf_func = otx2_sso_pf_func_get();
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
-
-int
-otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp,
- uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_inline_ipsec_cfg_msg *msg;
- struct otx2_eth_dev *otx2_eth_dev;
- int ret;
-
- if (!otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- return -EINVAL;
-
- otx2_eth_dev = otx2_eth_pmd_priv(eth_dev);
-
- msg = otx2_mbox_alloc_msg_cpt_inline_ipsec_cfg(mbox);
- msg->dir = CPT_INLINE_OUTBOUND;
- msg->enable = 1;
- msg->slot = qp->id;
-
- msg->nix_pf_func = otx2_eth_dev->pf_func;
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
deleted file mode 100644
index 03323e418c..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_MBOX_H_
-#define _OTX2_CRYPTODEV_MBOX_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_cryptodev_hw_access.h"
-
-int otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps);
-
-int otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues);
-
-int otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues);
-
-int otx2_cpt_queues_detach(const struct rte_cryptodev *dev);
-
-int otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev);
-
-__rte_internal
-int otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val);
-
-__rte_internal
-int otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val);
-
-int otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint16_t port_id);
-
-int otx2_cpt_inline_init(const struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_MBOX_H_ */
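[Editorial note, not part of the patch] `otx2_cpt_qp_create()`, removed further below, sizes the instruction queue in hardware units of 40 entries: the register takes `size_div40`, and the depth usable by software is `(size_div40 - 1) * 40`, so the requested length is rounded up to whole units and one extra unit is added. A minimal sketch of that arithmetic:

```c
#include <stdint.h>

/* Round iq_len up to whole 40-entry units and add one, matching
 * size_div40 = (iq_len + 40 - 1) / 40 + 1 in the removed
 * otx2_cpt_qp_create(). */
static uint32_t cpt_size_div40(uint32_t iq_len)
{
	return (iq_len + 40 - 1) / 40 + 1;
}

/* Depth actually usable by software for a given size_div40. */
static uint32_t cpt_effective_depth(uint32_t size_div40)
{
	return (size_div40 - 1) * 40;
}
```

With the driver's default `OTX2_CPT_DEFAULT_CMD_QLEN` of 8192 this yields `size_div40 = 206`, for an effective depth of 8200 entries, i.e. always at least the requested length.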
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
deleted file mode 100644
index 339b82f33e..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ /dev/null
@@ -1,1438 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <unistd.h>
-
-#include <cryptodev_pmd.h>
-#include <rte_errno.h>
-#include <ethdev_driver.h>
-#include <rte_event_crypto_adapter.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_po_ops.h"
-#include "otx2_mbox.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#include "cpt_hw_types.h"
-#include "cpt_pmd_logs.h"
-#include "cpt_pmd_ops_helper.h"
-#include "cpt_ucode.h"
-#include "cpt_ucode_asym.h"
-
-#define METABUF_POOL_CACHE_SIZE 512
-
-static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX];
-
-/* Forward declarations */
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id);
-
-static void
-qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
-{
- snprintf(name, size, "otx2_cpt_lf_mem_%u:%u", dev_id, qp_id);
-}
-
-static int
-otx2_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint8_t qp_id,
- unsigned int nb_elements)
-{
- char mempool_name[RTE_MEMPOOL_NAMESIZE];
- struct cpt_qp_meta_info *meta_info;
- int lcore_cnt = rte_lcore_count();
- int ret, max_mlen, mb_pool_sz;
- struct rte_mempool *pool;
- int asym_mlen = 0;
- int lb_mlen = 0;
- int sg_mlen = 0;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) {
-
- /* Get meta len for scatter gather mode */
- sg_mlen = cpt_pmd_ops_helper_get_mlen_sg_mode();
-
- /* Extra 32B saved for future considerations */
- sg_mlen += 4 * sizeof(uint64_t);
-
- /* Get meta len for linear buffer (direct) mode */
- lb_mlen = cpt_pmd_ops_helper_get_mlen_direct_mode();
-
- /* Extra 32B saved for future considerations */
- lb_mlen += 4 * sizeof(uint64_t);
- }
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
-
- /* Get meta len required for asymmetric operations */
- asym_mlen = cpt_pmd_ops_helper_asym_get_mlen();
- }
-
- /*
- * Check max requirement for meta buffer to
- * support crypto op of any type (sym/asym).
- */
- max_mlen = RTE_MAX(RTE_MAX(lb_mlen, sg_mlen), asym_mlen);
-
- /* Allocate mempool */
-
- snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "otx2_cpt_mb_%u:%u",
- dev->data->dev_id, qp_id);
-
- mb_pool_sz = nb_elements;
-
- /* For poll mode, core that enqueues and core that dequeues can be
- * different. For event mode, all cores are allowed to use same crypto
- * queue pair.
- */
- mb_pool_sz += (RTE_MAX(2, lcore_cnt) * METABUF_POOL_CACHE_SIZE);
-
- pool = rte_mempool_create_empty(mempool_name, mb_pool_sz, max_mlen,
- METABUF_POOL_CACHE_SIZE, 0,
- rte_socket_id(), 0);
-
- if (pool == NULL) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- return rte_errno;
- }
-
- ret = rte_mempool_set_ops_byname(pool, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
- NULL);
- if (ret) {
- CPT_LOG_ERR("Could not set mempool ops");
- goto mempool_free;
- }
-
- ret = rte_mempool_populate_default(pool);
- if (ret <= 0) {
- CPT_LOG_ERR("Could not populate metabuf pool");
- goto mempool_free;
- }
-
- meta_info = &qp->meta_info;
-
- meta_info->pool = pool;
- meta_info->lb_mlen = lb_mlen;
- meta_info->sg_mlen = sg_mlen;
-
- return 0;
-
-mempool_free:
- rte_mempool_free(pool);
- return ret;
-}
-
-static void
-otx2_cpt_metabuf_mempool_destroy(struct otx2_cpt_qp *qp)
-{
- struct cpt_qp_meta_info *meta_info = &qp->meta_info;
-
- rte_mempool_free(meta_info->pool);
-
- meta_info->pool = NULL;
- meta_info->lb_mlen = 0;
- meta_info->sg_mlen = 0;
-}
-
-static int
-otx2_cpt_qp_inline_cfg(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- static rte_atomic16_t port_offset = RTE_ATOMIC16_INIT(-1);
- uint16_t port_id, nb_ethport = rte_eth_dev_count_avail();
- int i, ret;
-
- for (i = 0; i < nb_ethport; i++) {
- port_id = rte_atomic16_add_return(&port_offset, 1) % nb_ethport;
- if (otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- break;
- }
-
- if (i >= nb_ethport)
- return 0;
-
- ret = otx2_cpt_qp_ethdev_bind(dev, qp, port_id);
- if (ret)
- return ret;
-
- /* Publish inline Tx QP to eth dev security */
- ret = otx2_sec_idev_tx_cpt_qp_add(port_id, qp);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static struct otx2_cpt_qp *
-otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
- uint8_t group)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- const struct rte_memzone *lf_mem;
- uint32_t len, iq_len, size_div40;
- char name[RTE_MEMZONE_NAMESIZE];
- uint64_t used_len, iova;
- struct otx2_cpt_qp *qp;
- uint64_t lmtline;
- uint8_t *va;
- int ret;
-
- /* Allocate queue pair */
- qp = rte_zmalloc_socket("OCTEON TX2 Crypto PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN, 0);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not allocate queue pair");
- return NULL;
- }
-
- /*
- * Pending queue updates make assumption that queue size is a power
- * of 2.
- */
- RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(OTX2_CPT_DEFAULT_CMD_QLEN));
-
- iq_len = OTX2_CPT_DEFAULT_CMD_QLEN;
-
- /*
- * Queue size must be a multiple of 40 and effective queue size to
- * software is (size_div40 - 1) * 40
- */
- size_div40 = (iq_len + 40 - 1) / 40 + 1;
-
- /* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
-
- /* Space for instruction group memory */
- len += size_div40 * 16;
-
- /* So that instruction queues start as pg size aligned */
- len = RTE_ALIGN(len, pg_sz);
-
- /* For instruction queues */
- len += OTX2_CPT_DEFAULT_CMD_QLEN * sizeof(union cpt_inst_s);
-
- /* Wastage after instruction queues */
- len = RTE_ALIGN(len, pg_sz);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp_id);
-
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
- RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
- RTE_CACHE_LINE_SIZE);
- if (lf_mem == NULL) {
- CPT_LOG_ERR("Could not allocate reserved memzone");
- goto qp_free;
- }
-
- va = lf_mem->addr;
- iova = lf_mem->iova;
-
- memset(va, 0, len);
-
- ret = otx2_cpt_metabuf_mempool_create(dev, qp, qp_id, iq_len);
- if (ret) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- goto lf_mem_free;
- }
-
- /* Initialize pending queue */
- qp->pend_q.rid_queue = (void **)va;
- qp->pend_q.tail = 0;
- qp->pend_q.head = 0;
-
- used_len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
- used_len += size_div40 * 16;
- used_len = RTE_ALIGN(used_len, pg_sz);
- iova += used_len;
-
- qp->iq_dma_addr = iova;
- qp->id = qp_id;
- qp->blkaddr = vf->lf_blkaddr[qp_id];
- qp->base = OTX2_CPT_LF_BAR2(vf, qp->blkaddr, qp_id);
-
- lmtline = vf->otx2_dev.bar2 +
- (RVU_BLOCK_ADDR_LMT << 20 | qp_id << 12) +
- OTX2_LMT_LF_LMTLINE(0);
-
- qp->lmtline = (void *)lmtline;
-
- qp->lf_nq_reg = qp->base + OTX2_CPT_LF_NQ(0);
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- goto mempool_destroy;
- }
-
- otx2_cpt_iq_disable(qp);
-
- ret = otx2_cpt_qp_inline_cfg(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not configure queue for inline IPsec");
- goto mempool_destroy;
- }
-
- ret = otx2_cpt_iq_enable(dev, qp, group, OTX2_CPT_QUEUE_HI_PRIO,
- size_div40);
- if (ret) {
- CPT_LOG_ERR("Could not enable instruction queue");
- goto mempool_destroy;
- }
-
- return qp;
-
-mempool_destroy:
- otx2_cpt_metabuf_mempool_destroy(qp);
-lf_mem_free:
- rte_memzone_free(lf_mem);
-qp_free:
- rte_free(qp);
- return NULL;
-}
-
-static int
-otx2_cpt_qp_destroy(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- const struct rte_memzone *lf_mem;
- char name[RTE_MEMZONE_NAMESIZE];
- int ret;
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- return ret;
- }
-
- otx2_cpt_iq_disable(qp);
-
- otx2_cpt_metabuf_mempool_destroy(qp);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp->id);
-
- lf_mem = rte_memzone_lookup(name);
-
- ret = rte_memzone_free(lf_mem);
- if (ret)
- return ret;
-
- rte_free(qp);
-
- return 0;
-}
-
-static int
-sym_xform_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->next) {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
- (xform->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC ||
- xform->next->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC ||
- xform->next->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->auth.algo == RTE_CRYPTO_AUTH_SHA1)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_SHA1 &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC)
- return -ENOTSUP;
-
- } else {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_NULL &&
- xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY)
- return -ENOTSUP;
- }
- return 0;
-}
-
-static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- struct rte_crypto_sym_xform *temp_xform = xform;
- struct cpt_sess_misc *misc;
- vq_cmd_word3_t vq_cmd_w3;
- void *priv;
- int ret;
-
- ret = sym_xform_verify(xform);
- if (unlikely(ret))
- return ret;
-
- if (unlikely(rte_mempool_get(pool, &priv))) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_sess_misc) +
- offsetof(struct cpt_ctx, mc_ctx));
-
- misc = priv;
-
- for ( ; xform != NULL; xform = xform->next) {
- switch (xform->type) {
- case RTE_CRYPTO_SYM_XFORM_AEAD:
- ret = fill_sess_aead(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_CIPHER:
- ret = fill_sess_cipher(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_AUTH:
- if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC)
- ret = fill_sess_gmac(xform, misc);
- else
- ret = fill_sess_auth(xform, misc);
- break;
- default:
- ret = -1;
- }
-
- if (ret)
- goto priv_put;
- }
-
- if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
- cpt_mac_len_verify(&temp_xform->auth)) {
- CPT_LOG_ERR("MAC length is not supported");
- struct cpt_ctx *ctx = SESS_PRIV(misc);
- if (ctx->auth_key != NULL) {
- rte_free(ctx->auth_key);
- ctx->auth_key = NULL;
- }
- ret = -ENOTSUP;
- goto priv_put;
- }
-
- set_sym_session_private_data(sess, driver_id, misc);
-
- misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
- sizeof(struct cpt_sess_misc);
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.cptr = misc->ctx_dma_addr + offsetof(struct cpt_ctx,
- mc_ctx);
-
- /*
- * IE engines support IPsec operations
- * SE engines support IPsec operations, Chacha-Poly and
- * Air-Crypto operations
- */
- if (misc->zsk_flag || misc->chacha_poly)
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE;
- else
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE_IE;
-
- misc->cpt_inst_w7 = vq_cmd_w3.u64;
-
- return 0;
-
-priv_put:
- rte_mempool_put(pool, priv);
-
- return -ENOTSUP;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
- struct cpt_request_info *req,
- void *lmtline,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7)
-{
- union rte_event_crypto_metadata *m_data;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- op->sym->session);
- if (m_data == NULL) {
- rte_pktmbuf_free(op->sym->m_src);
- rte_crypto_op_free(op);
- rte_errno = EINVAL;
- return -EINVAL;
- }
- } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)op +
- op->private_data_offset);
- } else {
- return -EINVAL;
- }
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
- m_data->response_info.flow_id) |
- ((uint64_t)m_data->response_info.sched_type << 32) |
- ((uint64_t)m_data->response_info.queue_id << 34));
- inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
- req->qp = qp;
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
- struct pending_queue *pend_q,
- struct cpt_request_info *req,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7,
- unsigned int burst_index)
-{
- void *lmtline = qp->lmtline;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (qp->ca_enable)
- return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- req->time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- pending_queue_push(pend_q, req, burst_index, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
- struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- struct cpt_qp_meta_info *minfo = &qp->meta_info;
- struct rte_crypto_asym_op *asym_op = op->asym;
- struct asym_op_params params = {0};
- struct cpt_asym_sess_misc *sess;
- uintptr_t *cop;
- void *mdata;
- int ret;
-
- if (unlikely(rte_mempool_get(minfo->pool, &mdata) < 0)) {
- CPT_LOG_ERR("Could not allocate meta buffer for request");
- return -ENOMEM;
- }
-
- sess = get_asym_session_private_data(asym_op->session,
- otx2_cryptodev_driver_id);
-
- /* Store IO address of the mdata to meta_buf */
- params.meta_buf = rte_mempool_virt2iova(mdata);
-
- cop = mdata;
- cop[0] = (uintptr_t)mdata;
- cop[1] = (uintptr_t)op;
- cop[2] = cop[3] = 0ULL;
-
- params.req = RTE_PTR_ADD(cop, 4 * sizeof(uintptr_t));
- params.req->op = cop;
-
- /* Adjust meta_buf to point to end of cpt_request_info structure */
- params.meta_buf += (4 * sizeof(uintptr_t)) +
- sizeof(struct cpt_request_info);
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
-		ret = cpt_modex_prep(&params, &sess->mod_ctx);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_RSA:
-		ret = cpt_enqueue_rsa_op(op, &params, sess);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
-		ret = cpt_enqueue_ecdsa_op(op, &params, sess, otx2_fpm_iova);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
-		ret = cpt_ecpm_prep(&asym_op->ecpm, &params,
- sess->ec_ctx.curveid);
- if (unlikely(ret))
- goto req_fail;
- break;
- default:
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ret = -EINVAL;
- goto req_fail;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
- sess->cpt_inst_w7, burst_index);
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Could not enqueue crypto req");
- goto req_fail;
- }
-
- return 0;
-
-req_fail:
- free_op_meta(mdata, minfo->pool);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q, unsigned int burst_index)
-{
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct cpt_request_info *req;
- struct cpt_sess_misc *sess;
- uint64_t cpt_op;
- void *mdata;
- int ret;
-
- sess = get_sym_session_private_data(sym_op->session,
- otx2_cryptodev_driver_id);
-
- cpt_op = sess->cpt_op;
-
- if (cpt_op & CPT_OP_CIPHER_MASK)
- ret = fill_fc_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
- else
- ret = fill_digest_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
-
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Crypto req : op %p, cpt_op 0x%x ret 0x%x",
- op, (unsigned int)cpt_op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
- if (unlikely(ret)) {
- /* Free buffer allocated by fill params routines */
- free_op_meta(mdata, qp->meta_info.pool);
- }
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- const unsigned int burst_index)
-{
- uint32_t winsz, esn_low = 0, esn_hi = 0, seql = 0, seqh = 0;
- struct rte_mbuf *m_src = op->sym->m_src;
- struct otx2_sec_session_ipsec_lp *sess;
- struct otx2_ipsec_po_sa_ctl *ctl_wrd;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *priv;
- struct cpt_request_info *req;
- uint64_t seq_in_sa, seq = 0;
- uint8_t esn;
- int ret;
-
- priv = get_sec_session_private_data(op->sym->sec_session);
- sess = &priv->ipsec.lp;
- sa = &sess->in_sa;
-
- ctl_wrd = &sa->ctl;
- esn = ctl_wrd->esn_en;
- winsz = sa->replay_win_sz;
-
- if (ctl_wrd->direction == OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND)
- ret = process_outb_sa(op, sess, &qp->meta_info, (void **)&req);
- else {
- if (winsz) {
- esn_low = rte_be_to_cpu_32(sa->esn_low);
- esn_hi = rte_be_to_cpu_32(sa->esn_hi);
- seql = *rte_pktmbuf_mtod_offset(m_src, uint32_t *,
- sizeof(struct rte_ipv4_hdr) + 4);
- seql = rte_be_to_cpu_32(seql);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = anti_replay_get_seqh(winsz, seql, esn_hi,
- esn_low);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- ret = anti_replay_check(sa->replay, seq, winsz);
- if (unlikely(ret)) {
- otx2_err("Anti replay check failed");
- return IPSEC_ANTI_REPLAY_FAILED;
- }
-
- if (esn) {
- seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- }
-
- ret = process_inb_sa(op, sess, &qp->meta_info, (void **)&req);
- }
-
- if (unlikely(ret)) {
- otx2_err("Crypto req : op %p, ret 0x%x", op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- const int driver_id = otx2_cryptodev_driver_id;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_cryptodev_sym_session *sess;
- int ret;
-
- /* Create temporary session */
- sess = rte_cryptodev_sym_session_create(qp->sess_mp);
- if (sess == NULL)
- return -ENOMEM;
-
- ret = sym_session_configure(driver_id, sym_op->xform, sess,
- qp->sess_mp_priv);
- if (ret)
- goto sess_put;
-
- sym_op->session = sess;
-
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q, burst_index);
-
- if (unlikely(ret))
- goto priv_put;
-
- return 0;
-
-priv_put:
- sym_session_clear(driver_id, sess);
-sess_put:
- rte_mempool_put(qp->sess_mp, sess);
- return ret;
-}
-
-static uint16_t
-otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- uint16_t nb_allowed, count = 0;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct rte_crypto_op *op;
- int ret;
-
- pend_q = &qp->pend_q;
-
- nb_allowed = pending_queue_free_slots(pend_q,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
- nb_ops = RTE_MIN(nb_ops, nb_allowed);
-
- for (count = 0; count < nb_ops; count++) {
- op = ops[count];
- if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
- ret = otx2_cpt_enqueue_sec(qp, op, pend_q,
- count);
- else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q,
- count);
- else
- ret = otx2_cpt_enqueue_sym_sessless(qp, op,
- pend_q, count);
- } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_asym(qp, op, pend_q,
- count);
- else
- break;
- } else
- break;
-
- if (unlikely(ret))
- break;
- }
-
- if (unlikely(!qp->ca_enable))
- pending_queue_commit(pend_q, count, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return count;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
- struct rte_crypto_rsa_xform *rsa_ctx)
-{
- struct rte_crypto_rsa_op_param *rsa = &cop->asym->rsa;
-
- switch (rsa->op_type) {
- case RTE_CRYPTO_ASYM_OP_ENCRYPT:
- rsa->cipher.length = rsa_ctx->n.length;
- memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
- break;
- case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->message.length = rsa_ctx->n.length;
- memcpy(rsa->message.data, req->rptr,
- rsa->message.length);
- } else {
- /* Get length of decrypted output */
- rsa->message.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy decrypted data.
- */
- memcpy(rsa->message.data, req->rptr + 2,
- rsa->message.length);
- }
- break;
- case RTE_CRYPTO_ASYM_OP_SIGN:
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- break;
- case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- } else {
- /* Get length of signed output */
- rsa->sign.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy signed data.
- */
- memcpy(rsa->sign.data, req->rptr + 2,
- rsa->sign.length);
- }
- if (memcmp(rsa->sign.data, rsa->message.data,
- rsa->message.length)) {
- CPT_LOG_DP_ERR("RSA verification failed");
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid RSA operation type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecdsa_op(struct rte_crypto_ecdsa_op_param *ecdsa,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- if (ecdsa->op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
- return;
-
- /* Separate out sign r and s components */
- memcpy(ecdsa->r.data, req->rptr, prime_len);
- memcpy(ecdsa->s.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecdsa->r.length = prime_len;
- ecdsa->s.length = prime_len;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecpm_op(struct rte_crypto_ecpm_op_param *ecpm,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- memcpy(ecpm->r.x.data, req->rptr, prime_len);
- memcpy(ecpm->r.y.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecpm->r.x.length = prime_len;
- ecpm->r.y.length = prime_len;
-}
-
-static void
-otx2_cpt_asym_post_process(struct rte_crypto_op *cop,
- struct cpt_request_info *req)
-{
- struct rte_crypto_asym_op *op = cop->asym;
- struct cpt_asym_sess_misc *sess;
-
- sess = get_asym_session_private_data(op->session,
- otx2_cryptodev_driver_id);
-
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- otx2_cpt_asym_rsa_op(cop, req, &sess->rsa_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- op->modex.result.length = sess->mod_ctx.modulus.length;
- memcpy(op->modex.result.data, req->rptr,
- op->modex.result.length);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- otx2_cpt_asym_dequeue_ecdsa_op(&op->ecdsa, req, &sess->ec_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- otx2_cpt_asym_dequeue_ecpm_op(&op->ecpm, req, &sess->ec_ctx);
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid crypto xform type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static void
-otx2_cpt_sec_post_process(struct rte_crypto_op *cop, uintptr_t *rsp)
-{
- struct cpt_request_info *req = (struct cpt_request_info *)rsp[2];
- vq_cmd_word0_t *word0 = (vq_cmd_word0_t *)&req->ist.ei0;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m = sym_op->m_src;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- uint16_t m_len = 0;
- int mdata_len;
- char *data;
-
- mdata_len = (int)rsp[3];
- rte_pktmbuf_trim(m, mdata_len);
-
- if (word0->s.opcode.major == OTX2_IPSEC_PO_PROCESS_IPSEC_INB) {
- data = rte_pktmbuf_mtod(m, char *);
- ip = (struct rte_ipv4_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
-
- if ((ip->version_ihl >> 4) == 4) {
- m_len = rte_be_to_cpu_16(ip->total_length);
- } else {
- ip6 = (struct rte_ipv6_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
- m_len = rte_be_to_cpu_16(ip6->payload_len) +
- sizeof(struct rte_ipv6_hdr);
- }
-
- m->data_len = m_len;
- m->pkt_len = m_len;
- m->data_off += OTX2_IPSEC_PO_INB_RPTR_HDR;
- }
-}
-
-static inline void
-otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
- uintptr_t *rsp, uint8_t cc)
-{
- unsigned int sz;
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
- if (likely(cc == OTX2_IPSEC_PO_CC_SUCCESS)) {
- otx2_cpt_sec_post_process(cop, rsp);
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
-
- return;
- }
-
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- sz = rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session);
- memset(cop->sym->session, 0, sz);
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /*
- * Pass cpt_req_info stored in metabuf during
- * enqueue.
- */
- rsp = RTE_PTR_ADD(rsp, 4 * sizeof(uintptr_t));
- otx2_cpt_asym_post_process(cop,
- (struct cpt_request_info *)rsp);
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-}
-
-static uint16_t
-otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- int i, nb_pending, nb_completed;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct cpt_request_info *req;
- struct rte_crypto_op *cop;
- uint8_t cc[nb_ops];
- uintptr_t *rsp;
- void *metabuf;
-
- pend_q = &qp->pend_q;
-
- nb_pending = pending_queue_level(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- /* Ensure pcount isn't read before data lands */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
-
- nb_ops = RTE_MIN(nb_ops, nb_pending);
-
- for (i = 0; i < nb_ops; i++) {
- pending_queue_peek(pend_q, (void **)&req,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
-
- cc[i] = otx2_cpt_compcode_get(req);
-
- if (unlikely(cc[i] == ERR_REQ_PENDING))
- break;
-
- ops[i] = req->op;
-
- pending_queue_pop(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
- }
-
- nb_completed = i;
-
- for (i = 0; i < nb_completed; i++) {
- rsp = (void *)ops[i];
-
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- ops[i] = cop;
-
- otx2_cpt_dequeue_post_process(qp, cop, rsp, cc[i]);
-
- free_op_meta(metabuf, qp->meta_info.pool);
- }
-
- return nb_completed;
-}
-
-void
-otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
-{
- dev->enqueue_burst = otx2_cpt_enqueue_burst;
- dev->dequeue_burst = otx2_cpt_dequeue_burst;
-
- rte_mb();
-}
-
-/* PMD ops */
-
-static int
-otx2_cpt_dev_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *conf)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int ret;
-
- if (conf->nb_queue_pairs > vf->max_queues) {
- CPT_LOG_ERR("Invalid number of queue pairs requested");
- return -EINVAL;
- }
-
- dev->feature_flags = otx2_cpt_default_ff_get() & ~conf->ff_disable;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
- /* Initialize shared FPM table */
- ret = cpt_fpm_init(otx2_fpm_iova);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret) {
- CPT_LOG_ERR("Could not detach CPT queues");
- return ret;
- }
- }
-
- /* Attach queues */
- ret = otx2_cpt_queues_attach(dev, conf->nb_queue_pairs);
- if (ret) {
- CPT_LOG_ERR("Could not attach CPT queues");
- return -ENODEV;
- }
-
- ret = otx2_cpt_msix_offsets_get(dev);
- if (ret) {
- CPT_LOG_ERR("Could not get MSI-X offsets");
- goto queues_detach;
- }
-
- /* Register error interrupts */
- ret = otx2_cpt_err_intr_register(dev);
- if (ret) {
- CPT_LOG_ERR("Could not register error interrupts");
- goto queues_detach;
- }
-
- ret = otx2_cpt_inline_init(dev);
- if (ret) {
- CPT_LOG_ERR("Could not enable inline IPsec");
- goto intr_unregister;
- }
-
- otx2_cpt_set_enqdeq_fns(dev);
-
- return 0;
-
-intr_unregister:
- otx2_cpt_err_intr_unregister(dev);
-queues_detach:
- otx2_cpt_queues_detach(dev);
- return ret;
-}
-
-static int
-otx2_cpt_dev_start(struct rte_cryptodev *dev)
-{
- RTE_SET_USED(dev);
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- return 0;
-}
-
-static void
-otx2_cpt_dev_stop(struct rte_cryptodev *dev)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO)
- cpt_fpm_clear();
-}
-
-static int
-otx2_cpt_dev_close(struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int i, ret = 0;
-
- for (i = 0; i < dev->data->nb_queue_pairs; i++) {
- ret = otx2_cpt_queue_pair_release(dev, i);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret)
- CPT_LOG_ERR("Could not detach CPT queues");
- }
-
- return ret;
-}
-
-static void
-otx2_cpt_dev_info_get(struct rte_cryptodev *dev,
- struct rte_cryptodev_info *info)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
-
- if (info != NULL) {
- info->max_nb_queue_pairs = vf->max_queues;
- info->feature_flags = otx2_cpt_default_ff_get();
- info->capabilities = otx2_cpt_capabilities_get();
- info->sym.max_nb_sessions = 0;
- info->driver_id = otx2_cryptodev_driver_id;
- info->min_mbuf_headroom_req = OTX2_CPT_MIN_HEADROOM_REQ;
- info->min_mbuf_tailroom_req = OTX2_CPT_MIN_TAILROOM_REQ;
- }
-}
-
-static int
-otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *conf,
- int socket_id __rte_unused)
-{
- uint8_t grp_mask = OTX2_CPT_ENG_GRPS_MASK;
- struct rte_pci_device *pci_dev;
- struct otx2_cpt_qp *qp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->data->queue_pairs[qp_id] != NULL)
- otx2_cpt_queue_pair_release(dev, qp_id);
-
- if (conf->nb_descriptors > OTX2_CPT_DEFAULT_CMD_QLEN) {
- CPT_LOG_ERR("Could not setup queue pair for %u descriptors",
- conf->nb_descriptors);
- return -EINVAL;
- }
-
- pci_dev = RTE_DEV_TO_PCI(dev->device);
-
- if (pci_dev->mem_resource[2].addr == NULL) {
- CPT_LOG_ERR("Invalid PCI mem address");
- return -EIO;
- }
-
- qp = otx2_cpt_qp_create(dev, qp_id, grp_mask);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not create queue pair %d", qp_id);
- return -ENOMEM;
- }
-
- qp->sess_mp = conf->mp_session;
- qp->sess_mp_priv = conf->mp_session_private;
- dev->data->queue_pairs[qp_id] = qp;
-
- return 0;
-}
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
-{
- struct otx2_cpt_qp *qp = dev->data->queue_pairs[qp_id];
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (qp == NULL)
- return -EINVAL;
-
- CPT_LOG_INFO("Releasing queue pair %d", qp_id);
-
- ret = otx2_cpt_qp_destroy(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not destroy queue pair %d", qp_id);
- return ret;
- }
-
- dev->data->queue_pairs[qp_id] = NULL;
-
- return 0;
-}
-
-static unsigned int
-otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
-{
- return cpt_get_session_size();
-}
-
-static int
-otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_configure(dev->driver_id, xform, sess, pool);
-}
-
-static void
-otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_clear(dev->driver_id, sess);
-}
-
-static unsigned int
-otx2_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
-{
- return sizeof(struct cpt_asym_sess_misc);
-}
-
-static int
-otx2_cpt_asym_session_cfg(struct rte_cryptodev *dev,
- struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
-{
- struct cpt_asym_sess_misc *priv;
- vq_cmd_word3_t vq_cmd_w3;
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session_private_data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
- ret = cpt_fill_asym_session_parameters(priv, xform);
- if (ret) {
- CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
- return ret;
- }
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_AE;
- priv->cpt_inst_w7 = vq_cmd_w3.u64;
-
- set_asym_session_private_data(sess, dev->driver_id, priv);
-
- return 0;
-}
-
-static void
-otx2_cpt_asym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_asym_session *sess)
-{
- struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- priv = get_asym_session_private_data(sess, dev->driver_id);
- if (priv == NULL)
- return;
-
- /* Free resources allocated in session_cfg */
- cpt_free_asym_session_parameters(priv);
-
- /* Reset and free object back to pool */
- memset(priv, 0, otx2_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
-}
-
-struct rte_cryptodev_ops otx2_cpt_ops = {
- /* Device control ops */
- .dev_configure = otx2_cpt_dev_config,
- .dev_start = otx2_cpt_dev_start,
- .dev_stop = otx2_cpt_dev_stop,
- .dev_close = otx2_cpt_dev_close,
- .dev_infos_get = otx2_cpt_dev_info_get,
-
- .stats_get = NULL,
- .stats_reset = NULL,
- .queue_pair_setup = otx2_cpt_queue_pair_setup,
- .queue_pair_release = otx2_cpt_queue_pair_release,
-
- /* Symmetric crypto ops */
- .sym_session_get_size = otx2_cpt_sym_session_get_size,
- .sym_session_configure = otx2_cpt_sym_session_configure,
- .sym_session_clear = otx2_cpt_sym_session_clear,
-
- /* Asymmetric crypto ops */
- .asym_session_get_size = otx2_cpt_asym_session_size_get,
- .asym_session_configure = otx2_cpt_asym_session_cfg,
- .asym_session_clear = otx2_cpt_asym_session_clear,
-
-};
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
deleted file mode 100644
index 7faf7ad034..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_H_
-#define _OTX2_CRYPTODEV_OPS_H_
-
-#include <cryptodev_pmd.h>
-
-#define OTX2_CPT_MIN_HEADROOM_REQ 48
-#define OTX2_CPT_MIN_TAILROOM_REQ 208
-
-extern struct rte_cryptodev_ops otx2_cpt_ops;
-
-#endif /* _OTX2_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
deleted file mode 100644
index 01c081a216..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_HELPER_H_
-#define _OTX2_CRYPTODEV_OPS_HELPER_H_
-
-#include "cpt_pmd_logs.h"
-
-static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
-{
- void *priv = get_sym_session_private_data(sess, driver_id);
- struct cpt_sess_misc *misc;
- struct rte_mempool *pool;
- struct cpt_ctx *ctx;
-
- if (priv == NULL)
- return;
-
- misc = priv;
- ctx = SESS_PRIV(misc);
-
- if (ctx->auth_key != NULL)
- rte_free(ctx->auth_key);
-
- memset(priv, 0, cpt_get_session_size());
-
- pool = rte_mempool_from_obj(priv);
-
- set_sym_session_private_data(sess, driver_id, NULL);
-
- rte_mempool_put(pool, priv);
-}
-
-static __rte_always_inline uint8_t
-otx2_cpt_compcode_get(struct cpt_request_info *req)
-{
- volatile struct cpt_res_s_9s *res;
- uint8_t ret;
-
- res = (volatile struct cpt_res_s_9s *)req->completion_addr;
-
- if (unlikely(res->compcode == CPT_9X_COMP_E_NOTDONE)) {
- if (rte_get_timer_cycles() < req->time_out)
- return ERR_REQ_PENDING;
-
- CPT_LOG_DP_ERR("Request timed out");
- return ERR_REQ_TIMEOUT;
- }
-
- if (likely(res->compcode == CPT_9X_COMP_E_GOOD)) {
- ret = NO_ERR;
- if (unlikely(res->uc_compcode)) {
- ret = res->uc_compcode;
- CPT_LOG_DP_DEBUG("Request failed with microcode error");
- CPT_LOG_DP_DEBUG("MC completion code 0x%x",
- res->uc_compcode);
- }
- } else {
- CPT_LOG_DP_DEBUG("HW completion code 0x%x", res->compcode);
-
- ret = res->compcode;
- switch (res->compcode) {
- case CPT_9X_COMP_E_INSTERR:
- CPT_LOG_DP_ERR("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- CPT_LOG_DP_ERR("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- CPT_LOG_DP_ERR("Request failed with hardware error");
- break;
- default:
- CPT_LOG_DP_ERR("Request failed with unknown completion code");
- }
- }
-
- return ret;
-}
-
-#endif /* _OTX2_CRYPTODEV_OPS_HELPER_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
deleted file mode 100644
index 95bce3621a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#ifndef _OTX2_CRYPTODEV_QP_H_
-#define _OTX2_CRYPTODEV_QP_H_
-
-#include <rte_common.h>
-#include <rte_eventdev.h>
-#include <rte_mempool.h>
-#include <rte_spinlock.h>
-
-#include "cpt_common.h"
-
-struct otx2_cpt_qp {
- uint32_t id;
- /**< Queue pair id */
- uint8_t blkaddr;
- /**< CPT0/1 BLKADDR of LF */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- void *lmtline;
- /**< Address of LMTLINE */
- rte_iova_t lf_nq_reg;
- /**< LF enqueue register address */
- struct pending_queue pend_q;
- /**< Pending queue */
- struct rte_mempool *sess_mp;
- /**< Session mempool */
- struct rte_mempool *sess_mp_priv;
- /**< Session private data mempool */
- struct cpt_qp_meta_info meta_info;
- /**< Metabuf info required to support operations on the queue pair */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- struct rte_event ev;
- /**< Event information required for binding cryptodev queue to
- * eventdev queue. Used by crypto adapter.
- */
- uint8_t ca_enable;
- /**< Set when queue pair is added to crypto adapter */
- uint8_t qp_ev_bind;
- /**< Set when queue pair is bound to event queue */
-};
-
-#endif /* _OTX2_CRYPTODEV_QP_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
deleted file mode 100644
index 9a4f84f8d8..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ /dev/null
@@ -1,655 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_security.h"
-
-static int
-ipsec_lp_len_precalc(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_lp *lp)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- lp->partial_len = 0;
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- lp->partial_len = sizeof(struct rte_ipv4_hdr);
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- lp->partial_len = sizeof(struct rte_ipv6_hdr);
- else
- return -EINVAL;
- }
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- lp->partial_len += sizeof(struct rte_esp_hdr);
- lp->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- lp->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- lp->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- lp->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- lp->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- lp->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- return 0;
- } else {
- return -EINVAL;
- }
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- lp->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- lp->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- lp->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- lp->partial_len += OTX2_SEC_SHA2_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
-
-static int
-otx2_cpt_enq_sa_write(struct otx2_sec_session_ipsec_lp *lp,
- struct otx2_cpt_qp *qptr, uint8_t opcode)
-{
- uint64_t lmt_status, time_out;
- void *lmtline = qptr->lmtline;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_res *res;
- uint64_t *mdata;
- int ret = 0;
-
- if (unlikely(rte_mempool_get(qptr->meta_info.pool,
- (void **)&mdata) < 0))
- return -ENOMEM;
-
- res = (struct otx2_cpt_res *)RTE_PTR_ALIGN(mdata, 16);
- res->compcode = CPT_9X_COMP_E_NOTDONE;
-
- inst.opcode = opcode | (lp->ctx_len << 8);
- inst.param1 = 0;
- inst.param2 = 0;
- inst.dlen = lp->ctx_len << 3;
- inst.dptr = rte_mempool_virt2iova(lp);
- inst.rptr = 0;
- inst.cptr = rte_mempool_virt2iova(lp);
- inst.egrp = OTX2_CPT_EGRP_SE;
-
- inst.u64[0] = 0;
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.res_addr = rte_mempool_virt2iova(res);
-
- rte_io_wmb();
-
- do {
- /* Copy CPT command to LMTLINE */
- otx2_lmt_mov(lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qptr->lf_nq_reg);
- } while (lmt_status == 0);
-
- time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- while (res->compcode == CPT_9X_COMP_E_NOTDONE) {
- if (rte_get_timer_cycles() > time_out) {
- rte_mempool_put(qptr->meta_info.pool, mdata);
- otx2_err("Request timed out");
- return -ETIMEDOUT;
- }
- rte_io_rmb();
- }
-
- if (unlikely(res->compcode != CPT_9X_COMP_E_GOOD)) {
- ret = res->compcode;
- switch (ret) {
- case CPT_9X_COMP_E_INSTERR:
- otx2_err("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- otx2_err("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- otx2_err("Request failed with hardware error");
- break;
- default:
- otx2_err("Request failed with unknown hardware "
- "completion code : 0x%x", ret);
- }
- goto mempool_put;
- }
-
- if (unlikely(res->uc_compcode != OTX2_IPSEC_PO_CC_SUCCESS)) {
- ret = res->uc_compcode;
- switch (ret) {
- case OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED:
- otx2_err("Invalid auth type");
- break;
- case OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED:
- otx2_err("Invalid encrypt type");
- break;
- default:
- otx2_err("Request failed with unknown microcode "
- "completion code : 0x%x", ret);
- }
- }
-
-mempool_put:
- rte_mempool_put(qptr->meta_info.pool, mdata);
- return ret;
-}
-
-static void
-set_session_misc_attributes(struct otx2_sec_session_ipsec_lp *sess,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_crypto_sym_xform *auth_xform,
- struct rte_crypto_sym_xform *cipher_xform)
-{
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- sess->iv_offset = crypto_xform->aead.iv.offset;
- sess->iv_length = crypto_xform->aead.iv.length;
- sess->aad_length = crypto_xform->aead.aad_length;
- sess->mac_len = crypto_xform->aead.digest_length;
- } else {
- sess->iv_offset = cipher_xform->cipher.iv.offset;
- sess->iv_length = cipher_xform->cipher.iv.length;
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
- sess->auth_iv_length = auth_xform->auth.iv.length;
- sess->mac_len = auth_xform->auth.digest_length;
- }
-}
-
-static int
-crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_ipsec_po_ip_template *template = NULL;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_out_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- int ret, ctx_len;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_out_sa));
-
- /* Initialize lookaside ipsec private data */
- lp->ip_id = 0;
- lp->seq_lo = 1;
- lp->seq_hi = 0;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- ret = ipsec_lp_len_precalc(ipsec, crypto_xform, lp);
- if (ret)
- return ret;
-
- /* Start ip id from 1 */
- lp->ip_id = 1;
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
- ip = &template->ip4.ipv4_hdr;
- if (ipsec->options.udp_encap) {
- ip->next_proto_id = IPPROTO_UDP;
- template->ip4.udp_src = rte_be_to_cpu_16(4500);
- template->ip4.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip->next_proto_id = IPPROTO_ESP;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- ip->version_ihl = RTE_IPV4_VHL_DEF;
- ip->time_to_live = ipsec->tunnel.ipv4.ttl;
- ip->type_of_service |= (ipsec->tunnel.ipv4.dscp << 2);
- if (ipsec->tunnel.ipv4.df)
- ip->fragment_offset = BIT(14);
- memcpy(&ip->src_addr, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&ip->dst_addr, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else if (ipsec->tunnel.type ==
- RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
-
- ip6 = &template->ip6.ipv6_hdr;
- if (ipsec->options.udp_encap) {
- ip6->proto = IPPROTO_UDP;
- template->ip6.udp_src = rte_be_to_cpu_16(4500);
- template->ip6.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip6->proto = (ipsec->proto ==
- RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
- IPPROTO_ESP : IPPROTO_AH;
- }
- ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 |
- ((ipsec->tunnel.ipv6.dscp <<
- RTE_IPV6_HDR_TC_SHIFT) &
- RTE_IPV6_HDR_TC_MASK) |
- ((ipsec->tunnel.ipv6.flabel <<
- RTE_IPV6_HDR_FL_SHIFT) &
- RTE_IPV6_HDR_FL_MASK));
- ip6->hop_limits = ipsec->tunnel.ipv6.hlimit;
- memcpy(&ip6->src_addr, &ipsec->tunnel.ipv6.src_addr,
- sizeof(struct in6_addr));
- memcpy(&ip6->dst_addr, &ipsec->tunnel.ipv6.dst_addr,
- sizeof(struct in6_addr));
- }
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- memcpy(sa->sha1.hmac_key, auth_key, auth_key_len);
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB);
-
- /* Set per packet IV and IKEv2 bits */
- lp->ucmd_param1 = BIT(11) | BIT(9);
- lp->ucmd_param2 = 0;
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_OUTB);
-}
-
-static int
-crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- int ret;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->in_sa;
- ctl = &sa->ctl;
-
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_in_sa));
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
-
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.hmac_key[0]) >> 3;
- RTE_ASSERT(lp->ctx_len == OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN);
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- memcpy(sa->aes_gcm.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.selector) >> 3;
- } else if (auth_xform->auth.algo ==
- RTE_CRYPTO_AUTH_SHA256_HMAC) {
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- sha2.selector) >> 3;
- }
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_INB);
- lp->ucmd_param1 = 0;
-
- /* Set IKEv2 bit */
- lp->ucmd_param2 = BIT(12);
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- otx2_err("Replay window size is not supported");
- return -ENOTSUP;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL)
- return -ENOMEM;
-
- /* Set window bottom to 1, base and top to size of window */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_INB);
-}
-
-static int
-crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- if (crypto_dev->data->queue_pairs[0] == NULL) {
- otx2_err("Setup cpt queue pair before creating sec session");
- return -EPERM;
- }
-
- ret = ipsec_po_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return crypto_sec_ipsec_inb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
- else
- return crypto_sec_ipsec_outb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_crypto_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
- struct rte_security_session *sess)
-{
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
-
- priv = get_sec_session_private_data(sess);
-
- if (priv == NULL)
- return 0;
-
- sess_mp = rte_mempool_from_obj(priv);
-
- memset(priv, 0, sizeof(*priv));
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_crypto_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
- struct rte_security_session *session,
- struct rte_mbuf *m, void *params __rte_unused)
-{
- /* Set security session as the pkt metadata */
- *rte_security_dynfield(m) = (rte_security_dynfield_t)session;
-
- return 0;
-}
-
-static int
-otx2_crypto_sec_get_userdata(void *device __rte_unused, uint64_t md,
- void **userdata)
-{
- /* Retrieve userdata */
- *userdata = (void *)md;
-
- return 0;
-}
-
-static struct rte_security_ops otx2_crypto_sec_ops = {
- .session_create = otx2_crypto_sec_session_create,
- .session_destroy = otx2_crypto_sec_session_destroy,
- .session_get_size = otx2_crypto_sec_session_get_size,
- .set_pkt_metadata = otx2_crypto_sec_set_pkt_mdata,
- .get_userdata = otx2_crypto_sec_get_userdata,
- .capabilities_get = otx2_crypto_sec_capabilities_get
-};
-
-int
-otx2_crypto_sec_ctx_create(struct rte_cryptodev *cdev)
-{
- struct rte_security_ctx *ctx;
-
- ctx = rte_malloc("otx2_cpt_dev_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
-
- if (ctx == NULL)
- return -ENOMEM;
-
- /* Populate ctx */
- ctx->device = cdev;
- ctx->ops = &otx2_crypto_sec_ops;
- ctx->sess_cnt = 0;
-
- cdev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *cdev)
-{
- rte_free(cdev->security_ctx);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h b/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
deleted file mode 100644
index ff3329c9c1..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_CRYPTODEV_SEC_H__
-#define __OTX2_CRYPTODEV_SEC_H__
-
-#include <rte_cryptodev.h>
-
-#include "otx2_ipsec_po.h"
-
-struct otx2_sec_session_ipsec_lp {
- RTE_STD_C11
- union {
- /* Inbound SA */
- struct otx2_ipsec_po_in_sa in_sa;
- /* Outbound SA */
- struct otx2_ipsec_po_out_sa out_sa;
- };
-
- uint64_t cpt_inst_w7;
- union {
- uint64_t ucmd_w0;
- struct {
- uint16_t ucmd_dlen;
- uint16_t ucmd_param2;
- uint16_t ucmd_param1;
- uint16_t ucmd_opcode;
- };
- };
-
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq_lo;
- uint32_t seq_hi;
- };
- };
-
- /** Context length in 8-byte words */
- size_t ctx_len;
- /** Auth IV offset in bytes */
- uint16_t auth_iv_offset;
- /** IV offset in bytes */
- uint16_t iv_offset;
- /** AAD length */
- uint16_t aad_length;
- /** MAC len in bytes */
- uint8_t mac_len;
- /** IV length in bytes */
- uint8_t iv_length;
- /** Auth IV length in bytes */
- uint8_t auth_iv_length;
-};
-
-int otx2_crypto_sec_ctx_create(struct rte_cryptodev *crypto_dev);
-
-void otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *crypto_dev);
-
-#endif /* __OTX2_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h b/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
deleted file mode 100644
index 089a3d073a..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
+++ /dev/null
@@ -1,227 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_ANTI_REPLAY_H__
-#define __OTX2_IPSEC_ANTI_REPLAY_H__
-
-#include <rte_mbuf.h>
-
-#include "otx2_ipsec_fp.h"
-
-#define WORD_SHIFT 6
-#define WORD_SIZE (1 << WORD_SHIFT)
-#define WORD_MASK (WORD_SIZE - 1)
-
-#define IPSEC_ANTI_REPLAY_FAILED (-1)
-
-static inline int
-anti_replay_check(struct otx2_ipsec_replay *replay, uint64_t seq,
- uint64_t winsz)
-{
- uint64_t *window = &replay->window[0];
- uint64_t ex_winsz = winsz + WORD_SIZE;
- uint64_t winwords = ex_winsz >> WORD_SHIFT;
- uint64_t base = replay->base;
- uint32_t winb = replay->winb;
- uint32_t wint = replay->wint;
- uint64_t seqword, shiftwords;
- uint64_t bit_pos;
- uint64_t shift;
- uint64_t *wptr;
- uint64_t tmp;
-
- if (winsz > 64)
- goto slow_shift;
- /* Check if the seq is the biggest one yet */
- if (likely(seq > base)) {
- shift = seq - base;
- if (shift < winsz) { /* In window */
- /*
- * If more than 64-bit anti-replay window,
- * use slow shift routine
- */
- wptr = window + (shift >> WORD_SHIFT);
- *wptr <<= shift;
- *wptr |= 1ull;
- } else {
- /* No special handling of window size > 64 */
- wptr = window + ((winsz - 1) >> WORD_SHIFT);
- /*
- * Zero out the whole window (especially for
- * bigger than 64b window) till the last 64b word
- * as the incoming sequence number minus
- * base sequence is more than the window size.
- */
- while (window != wptr)
- *window++ = 0ull;
- /*
- * Set the last bit (of the window) to 1
- * as that corresponds to the base sequence number.
- * Now any incoming sequence number which is
- * (base - window size - 1) will pass anti-replay check
- */
- *wptr = 1ull;
- }
- /*
- * Set the base to incoming sequence number as
- * that is the biggest sequence number seen yet
- */
- replay->base = seq;
- return 0;
- }
-
- bit_pos = base - seq;
-
- /* If seq falls behind the window, return failure */
- if (bit_pos >= winsz)
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* seq is within anti-replay window */
- wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
- bit_pos &= WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (*wptr & ((1ull) << bit_pos))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* mark as seen */
- *wptr |= ((1ull) << bit_pos);
- return 0;
-
-slow_shift:
- if (likely(seq > base)) {
- uint32_t i;
-
- shift = seq - base;
- if (unlikely(shift >= winsz)) {
- /*
- * shift is bigger than the window,
- * so just zero out everything
- */
- for (i = 0; i < winwords; i++)
- window[i] = 0;
-winupdate:
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- /* wint and winb range from 1 to ex_winsz */
- replay->wint = ((wint + shift - 1) % ex_winsz) + 1;
- replay->winb = ((winb + shift - 1) % ex_winsz) + 1;
-
- replay->base = seq;
- return 0;
- }
-
- /*
- * New sequence number is bigger than the base but
- * it's not bigger than base + window size
- */
-
- shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
- ((wint - 1) >> WORD_SHIFT);
- if (unlikely(shiftwords)) {
- tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
- for (i = 0; i < shiftwords; i++) {
- tmp %= winwords;
- window[tmp++] = 0;
- }
- }
-
- goto winupdate;
- }
-
- /* Sequence number is before the window */
- if (unlikely((seq + winsz) <= base))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* Sequence number is within the window */
-
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (window[seqword] & (1ull << (63 - bit_pos)))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- return 0;
-}
-
-static inline int
-cpt_ipsec_ip_antireplay_check(struct otx2_ipsec_fp_in_sa *sa, void *l3_ptr)
-{
- struct otx2_ipsec_fp_res_hdr *hdr = l3_ptr;
- uint64_t seq_in_sa;
- uint32_t seqh = 0;
- uint32_t seql;
- uint64_t seq;
- uint8_t esn;
- int ret;
-
- esn = sa->ctl.esn_en;
- seql = rte_be_to_cpu_32(hdr->seq_no_lo);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = rte_be_to_cpu_32(hdr->seq_no_hi);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- rte_spinlock_lock(&sa->replay->lock);
- ret = anti_replay_check(sa->replay, seq, sa->replay_win_sz);
- if (esn && (ret == 0)) {
- seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
- rte_be_to_cpu_32(sa->esn_low);
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- rte_spinlock_unlock(&sa->replay->lock);
-
- return ret;
-}
-
-static inline uint32_t
-anti_replay_get_seqh(uint32_t winsz, uint32_t seql,
- uint32_t esn_hi, uint32_t esn_low)
-{
- uint32_t win_low = esn_low - winsz + 1;
-
- if (esn_low > winsz - 1) {
- /* Window is in one sequence number subspace */
- if (seql > win_low)
- return esn_hi;
- else
- return esn_hi + 1;
- } else {
- /* Window is split across two sequence number subspaces */
- if (seql > win_low)
- return esn_hi - 1;
- else
- return esn_hi;
- }
-}
-#endif /* __OTX2_IPSEC_ANTI_REPLAY_H__ */
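[Note, not part of the patch: the fast path of anti_replay_check() above (window sizes up to 64) is a classic sliding-window bitmap. A simplified, self-contained sketch with hypothetical names; the removed code additionally handles >64-bit windows, ESN, and per-SA locking:]

```c
#include <assert.h>
#include <stdint.h>

/* Sliding-window anti-replay check for window sizes <= 64, mirroring
 * the <=64-bit fast path of anti_replay_check() removed above.
 * 'base' is the highest sequence number accepted so far; bit N of
 * 'window' records whether sequence (base - N) was seen.
 * Returns 0 on accept, -1 on replay or too-old sequence numbers. */
struct ar_state {
	uint64_t base;
	uint64_t window;
};

static int
ar_check(struct ar_state *ar, uint64_t seq, uint64_t winsz)
{
	uint64_t pos;

	if (seq > ar->base) {
		uint64_t shift = seq - ar->base;

		/* Slide the window forward; a jump past the whole
		 * window clears all history. */
		ar->window = (shift >= winsz) ? 0 : (ar->window << shift);
		ar->window |= 1ull;	/* mark 'seq' itself as seen */
		ar->base = seq;
		return 0;
	}

	pos = ar->base - seq;
	if (pos >= winsz)
		return -1;		/* behind the window: reject */
	if (ar->window & (1ull << pos))
		return -1;		/* bit already set: replay */
	ar->window |= 1ull << pos;	/* in-window, first sighting */
	return 0;
}
```

The per-word shifting in the removed slow path generalizes this to windows larger than one machine word.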
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_fp.h b/drivers/crypto/octeontx2/otx2_ipsec_fp.h
deleted file mode 100644
index 2461e7462b..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_fp.h
+++ /dev/null
@@ -1,371 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_FP_H__
-#define __OTX2_IPSEC_FP_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-/* Macros for anti replay and ESN */
-#define OTX2_IPSEC_MAX_REPLAY_WIN_SZ 1024
-
-struct otx2_ipsec_fp_res_hdr {
- uint32_t spi;
- uint32_t seq_no_lo;
- uint32_t seq_no_hi;
- uint32_t rsvd;
-};
-
-enum {
- OTX2_IPSEC_FP_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_FP_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_FP_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_FP_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENC_NULL = 0,
- OTX2_IPSEC_FP_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_FP_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_FP_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_FP_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_FP_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_FP_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AUTH_NULL = 0,
- OTX2_IPSEC_FP_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_FP_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_FP_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_FRAG_POST = 0,
- OTX2_IPSEC_FP_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_FP_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_fp_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_fp_out_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4];
- uint16_t udp_src;
- uint16_t udp_dst;
-
- /* w2 */
- uint32_t ip_src;
- uint32_t ip_dst;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
-struct otx2_ipsec_replay {
- rte_spinlock_t lock;
- uint32_t winb;
- uint32_t wint;
- uint64_t base; /**< base of the anti-replay window */
- uint64_t window[17]; /**< anti-replay window */
-};
-
-struct otx2_ipsec_fp_in_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4]; /* Only for AES-GCM */
- uint32_t unused;
-
- /* w2 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-
- RTE_STD_C11
- union {
- void *userdata;
- uint64_t udata64;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-
- uint32_t reserved1;
-};
-
-static inline int
-ipsec_fp_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_fp_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_fp_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_fp_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_fp_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_fp_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn == 1)
- ctl->esn_en = 1;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_FP_H__ */
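[Note, not part of the patch: ipsec_fp_sa_ctl_set() above encodes the AES key size into a 2-bit field of the SA control word rather than storing the byte length. A trivial standalone sketch of that encoding, with the enum values taken from the removed header:]

```c
#include <assert.h>

/* Encode an AES key length in bytes into the 2-bit aes_key_len code
 * used by struct otx2_ipsec_fp_sa_ctl above. Returns -1 for key sizes
 * the hardware does not support, matching the -EINVAL path in
 * ipsec_fp_sa_ctl_set(). */
enum aes_key_len_code {
	AES_KEY_LEN_128 = 1,	/* OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 */
	AES_KEY_LEN_192 = 2,	/* OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 */
	AES_KEY_LEN_256 = 3,	/* OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 */
};

static int
aes_key_len_encode(int key_bytes)
{
	switch (key_bytes) {
	case 16:
		return AES_KEY_LEN_128;
	case 24:
		return AES_KEY_LEN_192;
	case 32:
		return AES_KEY_LEN_256;
	default:
		return -1;	/* unsupported key size */
	}
}
```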
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po.h b/drivers/crypto/octeontx2/otx2_ipsec_po.h
deleted file mode 100644
index 695f552644..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po.h
+++ /dev/null
@@ -1,447 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_H__
-#define __OTX2_IPSEC_PO_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_ip.h>
-#include <rte_security.h>
-
-#define OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN 0x09
-
-#define OTX2_IPSEC_PO_WRITE_IPSEC_OUTB 0x20
-#define OTX2_IPSEC_PO_WRITE_IPSEC_INB 0x21
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB 0x23
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_INB 0x24
-
-#define OTX2_IPSEC_PO_INB_RPTR_HDR 0x8
-
-enum otx2_ipsec_po_comp_e {
- OTX2_IPSEC_PO_CC_SUCCESS = 0x00,
- OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED = 0xB0,
- OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED = 0xB1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_PO_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_PO_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_PO_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENC_NULL = 0,
- OTX2_IPSEC_PO_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_PO_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_PO_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_PO_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_PO_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_PO_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AUTH_NULL = 0,
- OTX2_IPSEC_PO_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_PO_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_PO_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_FRAG_POST = 0,
- OTX2_IPSEC_PO_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_PO_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_po_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-union otx2_ipsec_po_bit_perfect_iv {
- uint8_t aes_iv[16];
- uint8_t des_iv[8];
- struct {
- uint8_t nonce[4];
- uint8_t iv[8];
- uint8_t counter[4];
- } gcm;
-};
-
-struct otx2_ipsec_po_traffic_selector {
- rte_be16_t src_port[2];
- rte_be16_t dst_port[2];
- RTE_STD_C11
- union {
- struct {
- rte_be32_t src_addr[2];
- rte_be32_t dst_addr[2];
- } ipv4;
- struct {
- uint8_t src_addr[32];
- uint8_t dst_addr[32];
- } ipv6;
- };
-};
-
-struct otx2_ipsec_po_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_po_in_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8 */
- uint8_t udp_encap[8];
-
- /* w9-w33 */
- union {
- struct {
- uint8_t hmac_key[48];
- struct otx2_ipsec_po_traffic_selector selector;
- } aes_gcm;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_traffic_selector selector;
- } sha2;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-};
-
-struct otx2_ipsec_po_ip_template {
- RTE_STD_C11
- union {
- struct {
- struct rte_ipv4_hdr ipv4_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip4;
- struct {
- struct rte_ipv6_hdr ipv6_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip6;
- };
-};
-
-struct otx2_ipsec_po_out_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8-w55 */
- union {
- struct {
- struct otx2_ipsec_po_ip_template template;
- } aes_gcm;
- struct {
- uint8_t hmac_key[24];
- uint8_t unused[24];
- struct otx2_ipsec_po_ip_template template;
- } sha1;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_ip_template template;
- } sha2;
- };
-};
-
-static inline int
-ipsec_po_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- } else if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) {
- if (keylen >= 32 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (ipsec->life.bytes_hard_limit != 0 ||
- ipsec->life.bytes_soft_limit != 0 ||
- ipsec->life.packets_hard_limit != 0 ||
- ipsec->life.packets_soft_limit != 0)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_po_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_po_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_po_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_po_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_po_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = ctl->outer_ip_ver;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn)
- ctl->esn_en = 1;
-
- if (ipsec->options.udp_encap == 1)
- ctl->encap_type = OTX2_IPSEC_PO_SA_ENCAP_UDP;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
- ctl->valid = 1;
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_PO_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h b/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
deleted file mode 100644
index c3abf02187..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
+++ /dev/null
@@ -1,167 +0,0 @@
-
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_OPS_H__
-#define __OTX2_IPSEC_PO_OPS_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_security.h"
-
-static __rte_always_inline int32_t
-otx2_ipsec_po_out_rlen_get(struct otx2_sec_session_ipsec_lp *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline struct cpt_request_info *
-alloc_request_struct(char *maddr, void *cop, int mdata_len)
-{
- struct cpt_request_info *req;
- struct cpt_meta_info *meta;
- uint8_t *resp_addr;
- uintptr_t *op;
-
- meta = (void *)RTE_PTR_ALIGN((uint8_t *)maddr, 16);
-
- op = (uintptr_t *)meta->deq_op_info;
- req = &meta->cpt_req;
- resp_addr = (uint8_t *)&meta->cpt_res;
-
- req->completion_addr = (uint64_t *)((uint8_t *)resp_addr);
- *req->completion_addr = COMPLETION_CODE_INIT;
- req->comp_baddr = rte_mem_virt2iova(resp_addr);
- req->op = op;
-
- op[0] = (uintptr_t)((uint64_t)meta | 1ull);
- op[1] = (uintptr_t)cop;
- op[2] = (uintptr_t)req;
- op[3] = mdata_len;
-
- return req;
-}
-
-static __rte_always_inline int
-process_outb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- uint32_t dlen, rlen, extend_head, extend_tail;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- struct otx2_ipsec_po_out_hdr *hdr;
- struct otx2_ipsec_po_out_sa *sa;
- int hdr_len, mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- char *mdata, *data;
-
- sa = &sess->out_sa;
- hdr_len = sizeof(*hdr);
-
- dlen = rte_pktmbuf_pkt_len(m_src) + hdr_len;
- rlen = otx2_ipsec_po_out_rlen_get(sess, dlen - hdr_len);
-
- extend_head = hdr_len + RTE_ETHER_HDR_LEN;
- extend_tail = rlen - dlen;
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, extend_tail + mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- mdata += extend_tail; /* mdata follows encrypted data */
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- data = rte_pktmbuf_prepend(m_src, extend_head);
- if (unlikely(data == NULL)) {
- otx2_err("Not enough head room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- /*
- * Move the Ethernet header, to insert otx2_ipsec_po_out_hdr prior
- * to the IP header
- */
- memcpy(data, data + hdr_len, RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_po_out_hdr *)rte_pktmbuf_adj(m_src,
- RTE_ETHER_HDR_LEN);
-
- memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *,
- sess->iv_offset), sess->iv_length);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
- sa->esn_hi = sess->seq_hi;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq_lo);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
-exit:
- *prep_req = req;
-
- return ret;
-}
-
-static __rte_always_inline int
-process_inb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- int mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- uint32_t dlen;
- char *mdata;
-
- dlen = rte_pktmbuf_pkt_len(m_src);
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
-exit:
- *prep_req = req;
- return ret;
-}
-#endif /* __OTX2_IPSEC_PO_OPS_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_security.h b/drivers/crypto/octeontx2/otx2_security.h
deleted file mode 100644
index 29c8fc351b..0000000000
--- a/drivers/crypto/octeontx2/otx2_security.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SECURITY_H__
-#define __OTX2_SECURITY_H__
-
-#include <rte_security.h>
-
-#include "otx2_cryptodev_sec.h"
-#include "otx2_ethdev_sec.h"
-
-#define OTX2_SEC_AH_HDR_LEN 12
-#define OTX2_SEC_AES_GCM_IV_LEN 8
-#define OTX2_SEC_AES_GCM_MAC_LEN 16
-#define OTX2_SEC_AES_CBC_IV_LEN 16
-#define OTX2_SEC_SHA1_HMAC_LEN 12
-#define OTX2_SEC_SHA2_HMAC_LEN 16
-
-#define OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN 16
-
-struct otx2_sec_session_ipsec {
- union {
- struct otx2_sec_session_ipsec_ip ip;
- struct otx2_sec_session_ipsec_lp lp;
- };
- enum rte_security_ipsec_sa_direction dir;
-};
-
-struct otx2_sec_session {
- struct otx2_sec_session_ipsec ipsec;
- void *userdata;
- /**< Userdata registered by the application */
-} __rte_cache_aligned;
-
-#endif /* __OTX2_SECURITY_H__ */
diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map
deleted file mode 100644
index d36663132a..0000000000
--- a/drivers/crypto/octeontx2/version.map
+++ /dev/null
@@ -1,13 +0,0 @@
-DPDK_22 {
- local: *;
-};
-
-INTERNAL {
- global:
-
- otx2_cryptodev_driver_id;
- otx2_cpt_af_reg_read;
- otx2_cpt_af_reg_write;
-
- local: *;
-};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b68ce6c0a4..8db9775d7b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1127,6 +1127,16 @@ cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 63d6b410b2..d6706b57f7 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -11,7 +11,6 @@ drivers = [
'dpaa',
'dpaa2',
'dsw',
- 'octeontx2',
'opdl',
'skeleton',
'sw',
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
deleted file mode 100644
index ce360af5f8..0000000000
--- a/drivers/event/octeontx2/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_worker.c',
- 'otx2_worker_dual.c',
- 'otx2_evdev.c',
- 'otx2_evdev_adptr.c',
- 'otx2_evdev_crypto_adptr.c',
- 'otx2_evdev_irq.c',
- 'otx2_evdev_selftest.c',
- 'otx2_tim_evdev.c',
- 'otx2_tim_worker.c',
-)
-
-deps += ['bus_pci', 'common_octeontx2', 'crypto_octeontx2', 'mempool_octeontx2', 'net_octeontx2']
-
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../common/cpt')
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
deleted file mode 100644
index ccf28b678b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ /dev/null
@@ -1,1900 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <eventdev_pmd_pci.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_tx.h"
-#include "otx2_evdev_stats.h"
-#include "otx2_irq.h"
-#include "otx2_tim_evdev.h"
-
-static inline int
-sso_get_msix_offsets(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get SSO and SSOW MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < nb_ports; i++)
- dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
-
- for (i = 0; i < dev->nb_event_queues; i++)
- dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
-
- return rc;
-}
-
-void
-sso_fastpath_fns_set(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- /* Single WS modes */
- const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
-
- /* Dual WS modes */
- const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t
- ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- /* Tx modes */
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- event_dev->enqueue = otx2_ssogws_enq;
- event_dev->enqueue_burst = otx2_ssogws_enq_burst;
- event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
- event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_deq_seg
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_seg_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_seg_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_seg_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_deq
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_tx_adptr_enq
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_ca_enq;
-
- if (dev->dual_ws) {
- event_dev->enqueue = otx2_ssogws_dual_enq;
- event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
- event_dev->enqueue_new_burst =
- otx2_ssogws_dual_enq_new_burst;
- event_dev->enqueue_forward_burst =
- otx2_ssogws_dual_enq_fwd_burst;
-
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_dual_deq_seg
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_seg_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_seg_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_dual_deq
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
- }
-
- event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
- rte_mb();
-}
-
-static void
-otx2_sso_info_get(struct rte_eventdev *event_dev,
- struct rte_event_dev_info *dev_info)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
- dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
- dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
- dev_info->max_event_queues = dev->max_event_queues;
- dev_info->max_event_queue_flows = (1ULL << 20);
- dev_info->max_event_queue_priority_levels = 8;
- dev_info->max_event_priority_levels = 1;
- dev_info->max_event_ports = dev->max_event_ports;
- dev_info->max_event_port_dequeue_depth = 1;
- dev_info->max_event_port_enqueue_depth = 1;
- dev_info->max_num_events = dev->max_num_events;
- dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
- RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
- RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
- RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-}
-
-static void
-sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t val;
-
- val = queue;
- val |= 0ULL << 12; /* SET 0 */
- val |= 0x8000800080000000; /* Dont modify rest of the masks */
- val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
-
- otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
-}
-
-static int
-otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t link;
-
- RTE_SET_USED(priorities);
- for (link = 0; link < nb_links; link++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[link], true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[link], true);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[link], true);
- }
- }
- sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
-
- return (int)nb_links;
-}
-
-static int
-otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t unlink;
-
- for (unlink = 0; unlink < nb_unlinks; unlink++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[unlink],
- false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[unlink],
- false);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[unlink], false);
- }
- }
- sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
-
- return (int)nb_unlinks;
-}
-
-static int
-sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
- uint16_t nb_lf, uint8_t attach)
-{
- if (attach) {
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = nb_lf;
- break;
- case SSO_LF_GWS:
- req->ssow = nb_lf;
- break;
- default:
- return -EINVAL;
- }
- req->modify = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = true;
- break;
- case SSO_LF_GWS:
- req->ssow = true;
- break;
- default:
- return -EINVAL;
- }
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- }
-
- return 0;
-}
-
-static int
-sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
- enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
-{
- void *rsp;
- int rc;
-
- if (alloc) {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_alloc_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_alloc_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- } else {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_free_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_free_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- }
-
- rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
- if (rc < 0)
- return rc;
-
- if (alloc && type == SSO_LF_GGRP) {
- struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
-
- dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
- dev->xae_waes = rsp_ggrp->xaq_wq_entries;
- dev->iue = rsp_ggrp->in_unit_entries;
- }
-
- return 0;
-}
-
-static void
-otx2_sso_port_release(void *port)
-{
- struct otx2_ssogws_cookie *gws_cookie = ssogws_get_cookie(port);
- struct otx2_sso_evdev *dev;
- int i;
-
- if (!gws_cookie->configured)
- goto free;
-
- dev = sso_pmd_priv(gws_cookie->event_dev);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], i, false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], i, false);
- }
- memset(ws, 0, sizeof(*ws));
- } else {
- struct otx2_ssogws *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++)
- sso_port_link_modify(ws, i, false);
- memset(ws, 0, sizeof(*ws));
- }
-
- memset(gws_cookie, 0, sizeof(*gws_cookie));
-
-free:
- rte_free(gws_cookie);
-}
-
-static void
-otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-}
-
-static void
-sso_restore_links(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t *links_map;
- int i, j;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], j, true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify(ws, j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- }
- }
-}
-
-static void
-sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
-{
- ws->tag_op = base + SSOW_LF_GWS_TAG;
- ws->wqp_op = base + SSOW_LF_GWS_WQP;
- ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
- ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
- ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
- ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
-}
-
-static int
-sso_configure_dual_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t vws = 0;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports * 2;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws_dual *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws_dual) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws_dual *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
- ws->base[0] = base;
- vws++;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
- ws->base[1] = base;
- vws++;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < nb_lf; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
- sso_set_port_ops(ws, base);
- ws->base = base;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_queues(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int rc;
-
- otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
-
- nb_lf = dev->nb_event_queues;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GGRP LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
- otx2_err("Failed to init SSO GGRP LF");
- return -ENODEV;
- }
-
- return rc;
-}
-
-static int
-sso_xaq_allocate(struct otx2_sso_evdev *dev)
-{
- const struct rte_memzone *mz;
- struct npa_aura_s *aura;
- static int reconfig_cnt;
- char pool_name[RTE_MEMZONE_NAMESIZE];
- uint32_t xaq_cnt;
- int rc;
-
- if (dev->xaq_pool)
- rte_mempool_free(dev->xaq_pool);
-
- /*
- * Allocate memory for Add work backpressure.
- */
- mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
- if (mz == NULL)
- mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
- OTX2_ALIGN +
- sizeof(struct npa_aura_s),
- rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG,
- OTX2_ALIGN);
- if (mz == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- return -ENOMEM;
- }
-
- dev->fc_iova = mz->iova;
- dev->fc_mem = mz->addr;
- *dev->fc_mem = 0;
- aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
- memset(aura, 0, sizeof(struct npa_aura_s));
-
- aura->fc_ena = 1;
- aura->fc_addr = dev->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
-
- /* Taken from HRM 14.3.3(4) */
- xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
- if (dev->xae_cnt)
- xaq_cnt += dev->xae_cnt / dev->xae_waes;
- else if (dev->adptr_xae_cnt)
- xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
- else
- xaq_cnt += (dev->iue / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-
- otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
- /* Setup XAQ based on number of nb queues. */
- snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt);
- dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
- xaq_cnt, dev->xaq_buf_size, 0, 0,
- rte_socket_id(), 0);
-
- if (dev->xaq_pool == NULL) {
- otx2_err("Unable to create empty mempool.");
- rte_memzone_free(mz);
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(dev->xaq_pool,
- rte_mbuf_platform_mempool_ops(), aura);
- if (rc != 0) {
- otx2_err("Unable to set xaqpool ops.");
- goto alloc_fail;
- }
-
- rc = rte_mempool_populate_default(dev->xaq_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate xaqpool.");
- goto alloc_fail;
- }
- reconfig_cnt++;
- /* When SW does addwork (enqueue) check if there is space in XAQ by
- * comparing fc_addr above against the xaq_lmt calculated below.
- * There should be a minimum headroom (OTX2_SSO_XAQ_SLACK / 2) for SSO
- * to request XAQ to cache them even before enqueue is called.
- */
- dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
- dev->nb_event_queues);
- dev->nb_xaq_cfg = xaq_cnt;
-
- return 0;
-alloc_fail:
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(mz);
- return rc;
-}
-
-static int
-sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_hw_setconfig *req;
-
- otx2_sso_dbg("Configuring XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-sso_ggrp_free_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_release_xaq *req;
-
- otx2_sso_dbg("Freeing XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_release_xaq_aura(mbox);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static void
-sso_lf_teardown(struct otx2_sso_evdev *dev,
- enum otx2_sso_lf_type lf_type)
-{
- uint8_t nb_lf;
-
- switch (lf_type) {
- case SSO_LF_GGRP:
- nb_lf = dev->nb_event_queues;
- break;
- case SSO_LF_GWS:
- nb_lf = dev->nb_event_ports;
- nb_lf *= dev->dual_ws ? 2 : 1;
- break;
- default:
- return;
- }
-
- sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
- sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
-}
-
-static int
-otx2_sso_configure(const struct rte_eventdev *event_dev)
-{
- struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t deq_tmo_ns;
- int rc;
-
- sso_func_trace();
- deq_tmo_ns = conf->dequeue_timeout_ns;
-
- if (deq_tmo_ns == 0)
- deq_tmo_ns = dev->min_dequeue_timeout_ns;
-
- if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
- deq_tmo_ns > dev->max_dequeue_timeout_ns) {
- otx2_err("Unsupported dequeue timeout requested");
- return -EINVAL;
- }
-
- if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
- dev->is_timeout_deq = 1;
-
- dev->deq_tmo_ns = deq_tmo_ns;
-
- if (conf->nb_event_ports > dev->max_event_ports ||
- conf->nb_event_queues > dev->max_event_queues) {
- otx2_err("Unsupported event queues/ports requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_dequeue_depth > 1) {
- otx2_err("Unsupported event port deq depth requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_enqueue_depth > 1) {
- otx2_err("Unsupported event port enq depth requested");
- return -EINVAL;
- }
-
- if (dev->configured)
- sso_unregister_irqs(event_dev);
-
- if (dev->nb_event_queues) {
- /* Finit any previous queues. */
- sso_lf_teardown(dev, SSO_LF_GGRP);
- }
- if (dev->nb_event_ports) {
- /* Finit any previous ports. */
- sso_lf_teardown(dev, SSO_LF_GWS);
- }
-
- dev->nb_event_queues = conf->nb_event_queues;
- dev->nb_event_ports = conf->nb_event_ports;
-
- if (dev->dual_ws)
- rc = sso_configure_dual_ports(event_dev);
- else
- rc = sso_configure_ports(event_dev);
-
- if (rc < 0) {
- otx2_err("Failed to configure event ports");
- return -ENODEV;
- }
-
- if (sso_configure_queues(event_dev) < 0) {
- otx2_err("Failed to configure event queues");
- rc = -ENODEV;
- goto teardown_hws;
- }
-
- if (sso_xaq_allocate(dev) < 0) {
- rc = -ENOMEM;
- goto teardown_hwggrp;
- }
-
- /* Restore any prior port-queue mapping. */
- sso_restore_links(event_dev);
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_get_msix_offsets(event_dev);
- if (rc < 0) {
- otx2_err("Failed to get msix offsets %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_register_irqs(event_dev);
- if (rc < 0) {
- otx2_err("Failed to register irq %d", rc);
- goto teardown_hwggrp;
- }
-
- dev->configured = 1;
- rte_mb();
-
- return 0;
-teardown_hwggrp:
- sso_lf_teardown(dev, SSO_LF_GGRP);
-teardown_hws:
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
- dev->configured = 0;
- return rc;
-}
-
-static void
-otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
- struct rte_event_queue_conf *queue_conf)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-
- queue_conf->nb_atomic_flows = (1ULL << 20);
- queue_conf->nb_atomic_order_sequences = (1ULL << 20);
- queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
- queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
-}
-
-static int
-otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
- const struct rte_event_queue_conf *queue_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_grp_priority *req;
- int rc;
-
- sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
-
- req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
- req->grp = queue_id;
- req->weight = 0xFF;
- req->affinity = 0xFF;
- /* Normalize <0-255> to <0-7> */
- req->priority = queue_conf->priority / 32;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to set priority queue=%d", queue_id);
- return rc;
- }
-
- return 0;
-}
-
-static void
-otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
- struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- RTE_SET_USED(port_id);
- port_conf->new_event_threshold = dev->max_num_events;
- port_conf->dequeue_depth = 1;
- port_conf->enqueue_depth = 1;
-}
-
-static int
-otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
- const struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
- uint64_t val;
- uint16_t q;
-
- sso_func_trace("Port=%d", port_id);
- RTE_SET_USED(port_conf);
-
- if (event_dev->data->ports[port_id] == NULL) {
- otx2_err("Invalid port Id %d", port_id);
- return -EINVAL;
- }
-
- for (q = 0; q < dev->nb_event_queues; q++) {
- grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
- if (grps_base[q] == 0) {
- otx2_err("Failed to get grp[%d] base addr", q);
- return -EINVAL;
- }
- }
-
- /* Set get_work timeout for HWS */
- val = NSEC2USEC(dev->deq_tmo_ns) - 1;
-
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[port_id];
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
- }
-
- otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
-
- return 0;
-}
-
-static int
-otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
- uint64_t *tmo_ticks)
-{
- RTE_SET_USED(event_dev);
- *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
-
- return 0;
-}
-
-static void
-ssogws_dump(struct otx2_ssogws *ws, FILE *f)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_LINKS));
- fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDWQP));
- fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
- fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_NW_TIM));
- fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_SWTP));
- fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDTAG));
-}
-
-static void
-ssoggrp_dump(uintptr_t base, FILE *f)
-{
- fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_QCTL));
- fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
- fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_THR));
- fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_THR));
- fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
-}
-
-static void
-otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t queue;
- uint8_t port;
-
- fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
- "dual_ws" : "single_ws");
- /* Dump SSOW registers */
- for (port = 0; port < dev->nb_event_ports; port++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws =
- event_dev->data->ports[port];
-
- fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 0);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
- fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 1);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
- } else {
- fprintf(f, "[%s]SSO single workslot[%d] dump\n",
- __func__, port);
- ssogws_dump(event_dev->data->ports[port], f);
- }
- }
-
- /* Dump SSO registers */
- for (queue = 0; queue < dev->nb_event_queues; queue++) {
- fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- }
- }
-}
-
-static void
-otx2_handle_event(void *arg, struct rte_event event)
-{
- struct rte_eventdev *event_dev = arg;
-
- if (event_dev->dev_ops->dev_stop_flush != NULL)
- event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
- event, event_dev->data->dev_stop_flush_arg);
-}
-
-static void
-sso_qos_cfg(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct sso_grp_qos_cfg *req;
- uint16_t i;
-
- for (i = 0; i < dev->qos_queue_cnt; i++) {
- uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
- uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
- uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
-
- if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
- continue;
-
- req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
- req->xaq_limit = (dev->nb_xaq_cfg *
- (xaq_prcnt ? xaq_prcnt : 100)) / 100;
- req->taq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
- (iaq_prcnt ? iaq_prcnt : 100)) / 100;
- req->iaq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
- (taq_prcnt ? taq_prcnt : 100)) / 100;
- }
-
- if (dev->qos_queue_cnt)
- otx2_mbox_process(dev->mbox);
-}
-
-static void
-sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
- ws->swtag_req = 0;
- ws->vws = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset(ws);
- ws->swtag_req = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- }
- }
-
- rte_mb();
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- struct otx2_ssogws temp_ws;
-
- memcpy(&temp_ws, &ws->ws_state[0],
- sizeof(struct otx2_ssogws_state));
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- }
-
- /* reset SSO GWS cache */
- otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
- otx2_mbox_process(dev->mbox);
-}
-
-int
-sso_xae_reconfigure(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int rc = 0;
-
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 0);
-
- rc = sso_ggrp_free_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to free XAQ\n");
- return rc;
- }
-
- rte_mempool_free(dev->xaq_pool);
- dev->xaq_pool = NULL;
- rc = sso_xaq_allocate(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq pool %d", rc);
- return rc;
- }
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- return rc;
- }
-
- rte_mb();
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 1);
-
- return 0;
-}
-
-static int
-otx2_sso_start(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_qos_cfg(event_dev);
- sso_cleanup(event_dev, 1);
- sso_fastpath_fns_set(event_dev);
-
- return 0;
-}
-
-static void
-otx2_sso_stop(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_cleanup(event_dev, 0);
- rte_mb();
-}
-
-static int
-otx2_sso_close(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- uint16_t i;
-
- if (!dev->configured)
- return 0;
-
- sso_unregister_irqs(event_dev);
-
- for (i = 0; i < dev->nb_event_queues; i++)
- all_queues[i] = i;
-
- for (i = 0; i < dev->nb_event_ports; i++)
- otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
- all_queues, dev->nb_event_queues);
-
- sso_lf_teardown(dev, SSO_LF_GGRP);
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_ports = 0;
- dev->nb_event_queues = 0;
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME));
-
- return 0;
-}
-
-/* Initialize and register event driver with DPDK Application */
-static struct eventdev_ops otx2_sso_ops = {
- .dev_infos_get = otx2_sso_info_get,
- .dev_configure = otx2_sso_configure,
- .queue_def_conf = otx2_sso_queue_def_conf,
- .queue_setup = otx2_sso_queue_setup,
- .queue_release = otx2_sso_queue_release,
- .port_def_conf = otx2_sso_port_def_conf,
- .port_setup = otx2_sso_port_setup,
- .port_release = otx2_sso_port_release,
- .port_link = otx2_sso_port_link,
- .port_unlink = otx2_sso_port_unlink,
- .timeout_ticks = otx2_sso_timeout_ticks,
-
- .eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
- .eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add,
- .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del,
- .eth_rx_adapter_start = otx2_sso_rx_adapter_start,
- .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop,
-
- .eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get,
- .eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add,
- .eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del,
-
- .timer_adapter_caps_get = otx2_tim_caps_get,
-
- .crypto_adapter_caps_get = otx2_ca_caps_get,
- .crypto_adapter_queue_pair_add = otx2_ca_qp_add,
- .crypto_adapter_queue_pair_del = otx2_ca_qp_del,
-
- .xstats_get = otx2_sso_xstats_get,
- .xstats_reset = otx2_sso_xstats_reset,
- .xstats_get_names = otx2_sso_xstats_get_names,
-
- .dump = otx2_sso_dump,
- .dev_start = otx2_sso_start,
- .dev_stop = otx2_sso_stop,
- .dev_close = otx2_sso_close,
- .dev_selftest = otx2_sso_selftest,
-};
-
-#define OTX2_SSO_XAE_CNT "xae_cnt"
-#define OTX2_SSO_SINGLE_WS "single_ws"
-#define OTX2_SSO_GGRP_QOS "qos"
-#define OTX2_SSO_FORCE_BP "force_rx_bp"
-
-static void
-parse_queue_param(char *value, void *opaque)
-{
- struct otx2_sso_qos queue_qos = {0};
- uint8_t *val = (uint8_t *)&queue_qos;
- struct otx2_sso_evdev *dev = opaque;
- char *tok = strtok(value, "-");
- struct otx2_sso_qos *old_ptr;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&queue_qos.iaq_prcnt + 1)) {
- otx2_err("Invalid QoS parameter, expected [Qx-XAQ-TAQ-IAQ]");
- return;
- }
-
- dev->qos_queue_cnt++;
- old_ptr = dev->qos_parse_data;
- dev->qos_parse_data = rte_realloc(dev->qos_parse_data,
- sizeof(struct otx2_sso_qos) *
- dev->qos_queue_cnt, 0);
- if (dev->qos_parse_data == NULL) {
- dev->qos_parse_data = old_ptr;
- dev->qos_queue_cnt--;
- return;
- }
- dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
-}
-
-static void
-parse_qos_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- parse_queue_param(start + 1, opaque);
- s = end;
- start = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]; use '-' because ','
- * isn't allowed. Everything is expressed in percentages; 0 represents
- * the default.
- */
- parse_qos_list(value, opaque);
-
- return 0;
-}
-
-static void
-sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
-{
- struct rte_kvargs *kvlist;
- uint8_t single_ws = 0;
-
- if (devargs == NULL)
- return;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
- &dev->xae_cnt);
- rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
- &single_ws);
- rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
- dev);
- rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag,
- &dev->force_rx_bp);
- otx2_parse_common_devargs(kvlist);
- dev->dual_ws = !single_ws;
- rte_kvargs_free(kvlist);
-}
-
-static int
-otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_probe(pci_drv, pci_dev,
- sizeof(struct otx2_sso_evdev),
- otx2_sso_init);
-}
-
-static int
-otx2_sso_remove(struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini);
-}
-
-static const struct rte_pci_id pci_sso_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_sso = {
- .id_table = pci_sso_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = otx2_sso_probe,
- .remove = otx2_sso_remove,
-};
-
-int
-otx2_sso_init(struct rte_eventdev *event_dev)
-{
- struct free_rsrcs_rsp *rsrc_cnt;
- struct rte_pci_device *pci_dev;
- struct otx2_sso_evdev *dev;
- int rc;
-
- event_dev->dev_ops = &otx2_sso_ops;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- sso_fastpath_fns_set(event_dev);
- return 0;
- }
-
- dev = sso_pmd_priv(event_dev);
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
-
- /* Get SSO and SSOW MSIX rsrc cnt */
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count");
- goto otx2_dev_uninit;
- }
- otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso,
- rsrc_cnt->ssow, rsrc_cnt->npa);
-
- dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS);
- dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP);
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Unable to init NPA lf. It might not be provisioned");
- goto otx2_dev_uninit;
- }
-
- dev->drv_inited = true;
- dev->is_timeout_deq = 0;
- dev->min_dequeue_timeout_ns = USEC2NSEC(1);
- dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
- dev->max_num_events = -1;
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
-
- if (!dev->max_event_ports || !dev->max_event_queues) {
- otx2_err("Not enough eventdev resource queues=%d ports=%d",
- dev->max_event_queues, dev->max_event_ports);
- rc = -ENODEV;
- goto otx2_npa_lf_uninit;
- }
-
- dev->dual_ws = 1;
- sso_parse_devargs(dev, pci_dev->device.devargs);
- if (dev->dual_ws) {
- otx2_sso_dbg("Using dual workslot mode");
- dev->max_event_ports = dev->max_event_ports / 2;
- } else {
- otx2_sso_dbg("Using single workslot mode");
- }
-
- otx2_sso_pf_func_set(dev->pf_func);
- otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
- event_dev->data->name, dev->max_event_queues,
- dev->max_event_ports);
-
- otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
-
- return 0;
-
-otx2_npa_lf_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- return rc;
-}
-
-int
-otx2_sso_fini(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct rte_pci_device *pci_dev;
-
- /* For secondary processes, nothing to be done */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("Common resource in use by other devices");
- return -EAGAIN;
- }
-
- otx2_tim_fini();
- otx2_dev_fini(pci_dev, dev);
-
- return 0;
-}
-
-RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
-RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
-RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
- OTX2_SSO_SINGLE_WS "=1"
- OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_FORCE_BP "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
deleted file mode 100644
index a5d34b7df7..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_H__
-#define __OTX2_EVDEV_H__
-
-#include <rte_eventdev.h>
-#include <eventdev_pmd.h>
-#include <rte_event_eth_rx_adapter.h>
-#include <rte_event_eth_tx_adapter.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_mempool.h"
-#include "otx2_tim_evdev.h"
-
-#define EVENTDEV_NAME_OCTEONTX2_PMD event_octeontx2
-
-#define sso_func_trace otx2_sso_dbg
-
-#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
-#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
-#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc"
-#define OTX2_SSO_SQB_LIMIT (0x180)
-#define OTX2_SSO_XAQ_SLACK (8)
-#define OTX2_SSO_XAQ_CACHE_CNT (0x7)
-#define OTX2_SSO_WQE_SG_PTR (9)
-
-/* SSO LF register offsets (BAR2) */
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-/* SSOW LF register offsets (BAR2) */
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
-#define OTX2_SSOW_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
-#define OTX2_SSOW_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
-
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us) * 1E3)
-#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
-#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq))
-
-enum otx2_sso_lf_type {
- SSO_LF_GGRP,
- SSO_LF_GWS
-};
-
-union otx2_sso_event {
- uint64_t get_work0;
- struct {
- uint32_t flow_id:20;
- uint32_t sub_event_type:8;
- uint32_t event_type:4;
- uint8_t op:2;
- uint8_t rsvd:4;
- uint8_t sched_type:2;
- uint8_t queue_id;
- uint8_t priority;
- uint8_t impl_opaque;
- };
-} __rte_aligned(64);
-
-enum {
- SSO_SYNC_ORDERED,
- SSO_SYNC_ATOMIC,
- SSO_SYNC_UNTAGGED,
- SSO_SYNC_EMPTY
-};
-
-struct otx2_sso_qos {
- uint8_t queue;
- uint8_t xaq_prcnt;
- uint8_t taq_prcnt;
- uint8_t iaq_prcnt;
-};
-
-struct otx2_sso_evdev {
- OTX2_DEV; /* Base class */
- uint8_t max_event_queues;
- uint8_t max_event_ports;
- uint8_t is_timeout_deq;
- uint8_t nb_event_queues;
- uint8_t nb_event_ports;
- uint8_t configured;
- uint32_t deq_tmo_ns;
- uint32_t min_dequeue_timeout_ns;
- uint32_t max_dequeue_timeout_ns;
- int32_t max_num_events;
- uint64_t *fc_mem;
- uint64_t xaq_lmt;
- uint64_t nb_xaq_cfg;
- rte_iova_t fc_iova;
- struct rte_mempool *xaq_pool;
- uint64_t rx_offloads;
- uint64_t tx_offloads;
- uint64_t adptr_xae_cnt;
- uint16_t rx_adptr_pool_cnt;
- uint64_t *rx_adptr_pools;
- uint16_t max_port_id;
- uint16_t tim_adptr_ring_cnt;
- uint16_t *timer_adptr_rings;
- uint64_t *timer_adptr_sz;
- /* Dev args */
- uint8_t dual_ws;
- uint32_t xae_cnt;
- uint8_t qos_queue_cnt;
- uint8_t force_rx_bp;
- struct otx2_sso_qos *qos_parse_data;
- /* HW const */
- uint32_t xae_waes;
- uint32_t xaq_buf_size;
- uint32_t iue;
- /* MSIX offsets */
- uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
- uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
-} __rte_cache_aligned;
-
-#define OTX2_SSOGWS_OPS \
- /* WS ops */ \
- uintptr_t getwrk_op; \
- uintptr_t tag_op; \
- uintptr_t wqp_op; \
- uintptr_t swtag_flush_op; \
- uintptr_t swtag_norm_op; \
- uintptr_t swtag_desched_op;
-
-/* Event port aka GWS */
-struct otx2_ssogws {
- /* Get Work Fastpath data */
- OTX2_SSOGWS_OPS;
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-struct otx2_ssogws_state {
- OTX2_SSOGWS_OPS;
-};
-
-struct otx2_ssogws_dual {
- /* Get Work Fastpath data */
- struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t vws; /* Ping pong bit */
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base[2] __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-static inline struct otx2_sso_evdev *
-sso_pmd_priv(const struct rte_eventdev *event_dev)
-{
- return event_dev->data->dev_private;
-}
-
-struct otx2_ssogws_cookie {
- const struct rte_eventdev *event_dev;
- bool configured;
-};
-
-static inline struct otx2_ssogws_cookie *
-ssogws_get_cookie(void *ws)
-{
- return (struct otx2_ssogws_cookie *)
- ((uint8_t *)ws - RTE_CACHE_LINE_SIZE);
-}
-
-static const union mbuf_initializer mbuf_init = {
- .fields = {
- .data_off = RTE_PKTMBUF_HEADROOM,
- .refcnt = 1,
- .nb_segs = 1,
- .port = 0
- }
-};
-
-static __rte_always_inline void
-otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id,
- const uint32_t tag, const uint32_t flags,
- const void * const lookup_mem)
-{
- struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1;
- uint64_t val = mbuf_init.value | (uint64_t)port_id << 48;
-
- if (flags & NIX_RX_OFFLOAD_TSTAMP_F)
- val |= NIX_TIMESYNC_RX_OFFSET;
-
- otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
- (struct rte_mbuf *)mbuf, lookup_mem,
- val, flags);
-
-}
-
-static inline int
-parse_kvargs_flag(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint8_t *)opaque = !!atoi(value);
- return 0;
-}
-
-static inline int
-parse_kvargs_value(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint32_t *)opaque = (uint32_t)atoi(value);
- return 0;
-}
-
-#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES
-#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES
-
-/* Single WS APIs */
-uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Dual WS APIs */
-uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Auto-generated APIs */
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
- \
-uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks);\
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events); \
-uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
- uint32_t event_type);
-int sso_xae_reconfigure(struct rte_eventdev *event_dev);
-void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
-
-int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
-int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id);
-int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_tx_adapter_queue_add(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-int otx2_sso_tx_adapter_queue_del(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-/* Event crypto adapter APIs */
-int otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps);
-
-int otx2_ca_qp_add(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id,
- const struct rte_event *event);
-
-int otx2_ca_qp_del(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id);
-
-/* Cleanup APIs */
-typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
-void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
- uintptr_t base, otx2_handle_event_t fn, void *arg);
-void ssogws_reset(struct otx2_ssogws *ws);
-/* Selftest */
-int otx2_sso_selftest(void);
-/* Init and fini APIs */
-int otx2_sso_init(struct rte_eventdev *event_dev);
-int otx2_sso_fini(struct rte_eventdev *event_dev);
-/* IRQ handlers */
-int sso_register_irqs(const struct rte_eventdev *event_dev);
-void sso_unregister_irqs(const struct rte_eventdev *event_dev);
-
-#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
deleted file mode 100644
index a91f784b1e..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ /dev/null
@@ -1,656 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019-2021 Marvell.
- */
-
-#include "otx2_evdev.h"
-
-#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100)
-
-int
-otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int rc;
-
- RTE_SET_USED(event_dev);
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
- else
- *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ;
-
- return 0;
-}
-
-static inline int
-sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp,
- uint16_t eth_port_id)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq.caching = 0;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 1;
- aq->rq.sso_tt = tt;
- aq->rq.sso_grp = ggrp;
- aq->rq.ena_wqwd = 1;
- /* Mbuf Header generation :
- * > FIRST_SKIP is a super set of WQE_SKIP, dont modify first skip as
- * it already has data related to mbuf size, headroom, private area.
- * > Using WQE_SKIP we can directly assign
- * mbuf = wqe - sizeof(struct mbuf);
- * so that mbuf header will not have unpredicted values while headroom
- * and private data starts at the beginning of wqe_data.
- */
- aq->rq.wqe_skip = 1;
- aq->rq.wqe_caching = 1;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 20; /* 20-bits */
-
- /* Flow tag calculation:
- *
- * rq_tag <31:24> = good/bad_tag<8:0>;
- * rq_tag <23:0> = [ltag]
- *
- * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20>
- * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag);
- *
- * Setup :
- * ltag<23:0> = (eth_port_id & 0xF) << 20;
- * good/bad_tag<8:0> =
- * ((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4);
- *
- * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) |
- * <27:20> (eth_port_id) | <20:0> [TAG]
- */
-
- aq->rq.ltag = (eth_port_id & 0xF) << 20;
- aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) |
- (RTE_EVENT_TYPE_ETHDEV << 4);
- aq->rq.bad_utag = aq->rq.good_utag;
-
- aq->rq.ena = 0; /* Don't enable RQ yet */
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to init rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-static inline int
-sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to enable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 0;
- aq->rq.sso_tt = SSO_TT_UNTAGGED;
- aq->rq.sso_grp = 0;
- aq->rq.ena_wqwd = 0;
- aq->rq.wqe_caching = 0;
- aq->rq.wqe_skip = 0;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 0x20;
- aq->rq.ltag = 0;
- aq->rq.good_utag = 0;
- aq->rq.bad_utag = 0;
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to clear rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-void
-sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
-{
- int i;
-
- switch (event_type) {
- case RTE_EVENT_TYPE_ETHDEV:
- {
- struct otx2_eth_rxq *rxq = data;
- uint64_t *old_ptr;
-
- for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
- if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i])
- return;
- }
-
- dev->rx_adptr_pool_cnt++;
- old_ptr = dev->rx_adptr_pools;
- dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools,
- sizeof(uint64_t) *
- dev->rx_adptr_pool_cnt, 0);
- if (dev->rx_adptr_pools == NULL) {
- dev->adptr_xae_cnt += rxq->pool->size;
- dev->rx_adptr_pools = old_ptr;
- dev->rx_adptr_pool_cnt--;
- return;
- }
- dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
- (uint64_t)rxq->pool;
-
- dev->adptr_xae_cnt += rxq->pool->size;
- break;
- }
- case RTE_EVENT_TYPE_TIMER:
- {
- struct otx2_tim_ring *timr = data;
- uint16_t *old_ring_ptr;
- uint64_t *old_sz_ptr;
-
- for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
- if (timr->ring_id != dev->timer_adptr_rings[i])
- continue;
- if (timr->nb_timers == dev->timer_adptr_sz[i])
- return;
- dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz[i] = timr->nb_timers;
-
- return;
- }
-
- dev->tim_adptr_ring_cnt++;
- old_ring_ptr = dev->timer_adptr_rings;
- old_sz_ptr = dev->timer_adptr_sz;
-
- dev->timer_adptr_rings = rte_realloc(dev->timer_adptr_rings,
- sizeof(uint16_t) *
- dev->tim_adptr_ring_cnt,
- 0);
- if (dev->timer_adptr_rings == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_rings = old_ring_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_sz = rte_realloc(dev->timer_adptr_sz,
- sizeof(uint64_t) *
- dev->tim_adptr_ring_cnt,
- 0);
-
- if (dev->timer_adptr_sz == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz = old_sz_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
- timr->ring_id;
- dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
- timr->nb_timers;
-
- dev->adptr_xae_cnt += timr->nb_timers;
- break;
- }
- default:
- break;
- }
-}
-
-static inline void
-sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- }
- }
-}
-
-static inline void
-sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev,
- struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq,
- uint8_t ena)
-{
- struct otx2_fc_info *fc = &otx2_eth_dev->fc_info;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint32_t limit;
- int rc;
-
- if (otx2_dev_is_sdp(otx2_eth_dev))
- return;
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return;
- mbox = lf->mbox;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return;
-
- limit = rsp->aura.limit;
- /* BP is already enabled. */
- if (rsp->aura.bp_ena) {
- /* If BP ids don't match disable BP. */
- if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) {
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id =
- npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- req->aura.bp_ena = 0;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
- }
- return;
- }
-
- /* BP was previously enabled but is now disabled; skip. */
- if (rsp->aura.bp)
- return;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- if (ena) {
- req->aura.nix0_bpid = fc->bpid[0];
- req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
- req->aura.bp = NIX_RQ_AURA_THRESH(
- limit > 128 ? 256 : limit); /* 95% of size */
- req->aura_mask.bp = ~(req->aura_mask.bp);
- }
-
- req->aura.bp_ena = !!ena;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t port = eth_dev->data->port_id;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure(
- (struct rte_eventdev *)(uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, i,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- }
- rxq = eth_dev->data->rx_queues[0];
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- } else {
- rxq = eth_dev->data->rx_queues[rx_queue_id];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure((struct rte_eventdev *)
- (uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- }
-
- if (rc < 0) {
- otx2_err("Failed to configure Rx adapter port=%d, q=%d", port,
- queue_conf->ev.queue_id);
- return rc;
- }
-
- dev->rx_offloads |= otx2_eth_dev->rx_offload_flags;
- dev->tstamp = &otx2_eth_dev->tstamp;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = sso_rxq_disable(otx2_eth_dev, i);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[i], false);
- }
- } else {
- rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[rx_queue_id],
- false);
- }
-
- if (rc < 0)
- otx2_err("Failed to clear Rx adapter config port=%d, q=%d",
- eth_dev->data->port_id, rx_queue_id);
-
- return rc;
-}
-
-int
-otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int ret;
-
- RTE_SET_USED(dev);
- ret = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (ret)
- *caps = 0;
- else
- *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
-
- return 0;
-}
-
-static int
-sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-sso_add_tx_queue_data(const struct rte_eventdev *event_dev,
- uint16_t eth_port_id, uint16_t tx_queue_id,
- struct otx2_eth_txq *txq)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < event_dev->data->nb_ports; i++) {
- dev->max_port_id = RTE_MAX(dev->max_port_id, eth_port_id);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *old_dws;
- struct otx2_ssogws_dual *dws;
-
- old_dws = event_dev->data->ports[i];
- dws = rte_realloc_socket(ssogws_get_cookie(old_dws),
- sizeof(struct otx2_ssogws_dual)
- + RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (dws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- dws = (struct otx2_ssogws_dual *)
- ((uint8_t *)dws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&dws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = dws;
- } else {
- struct otx2_ssogws *old_ws;
- struct otx2_ssogws *ws;
-
- old_ws = event_dev->data->ports[i];
- ws = rte_realloc_socket(ssogws_get_cookie(old_ws),
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&ws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = ws;
- }
- }
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_eth_txq *txq;
- int i, ret;
-
- RTE_SET_USED(id);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev,
- eth_dev->data->port_id, i,
- txq);
- if (ret < 0)
- return ret;
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev, eth_dev->data->port_id,
- tx_queue_id, txq);
- if (ret < 0)
- return ret;
- }
-
- dev->tx_offloads |= otx2_eth_dev->tx_offload_flags;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_txq *txq;
- int i;
-
- RTE_SET_USED(id);
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(event_dev);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- txq->nb_sqb_bufs);
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs);
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
deleted file mode 100644
index d59d6c53f6..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_evdev.h"
-
-int
-otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(cdev);
-
- *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
-
- return 0;
-}
-
-static int
-otx2_ca_qp_sso_link(const struct rte_cryptodev *cdev, struct otx2_cpt_qp *qp,
- uint16_t sso_pf_func)
-{
- union otx2_cpt_af_lf_ctl2 af_lf_ctl2;
- int ret;
-
- ret = otx2_cpt_af_reg_read(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, &af_lf_ctl2.u);
- if (ret)
- return ret;
-
- af_lf_ctl2.s.sso_pf_func = sso_pf_func;
- ret = otx2_cpt_af_reg_write(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, af_lf_ctl2.u);
- return ret;
-}
-
-static void
-otx2_ca_qp_init(struct otx2_cpt_qp *qp, const struct rte_event *event)
-{
- if (event) {
- qp->qp_ev_bind = 1;
- rte_memcpy(&qp->ev, event, sizeof(struct rte_event));
- } else {
- qp->qp_ev_bind = 0;
- }
- qp->ca_enable = 1;
-}
-
-int
-otx2_ca_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id, const struct rte_event *event)
-{
- struct otx2_sso_evdev *sso_evdev = sso_pmd_priv(dev);
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- uint16_t sso_pf_func = otx2_sso_pf_func_get();
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret) {
- uint8_t qp_tmp;
- for (qp_tmp = 0; qp_tmp < qp_id; qp_tmp++)
- otx2_ca_qp_del(dev, cdev, qp_tmp);
- return ret;
- }
- otx2_ca_qp_init(qp, event);
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret)
- return ret;
- otx2_ca_qp_init(qp, event);
- }
-
- sso_evdev->rx_offloads |= NIX_RX_OFFLOAD_SECURITY_F;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)dev);
-
- /* Update crypto adapter xae count */
- if (queue_pair_id == -1)
- sso_evdev->adptr_xae_cnt +=
- vf->nb_queues * OTX2_CPT_DEFAULT_CMD_QLEN;
- else
- sso_evdev->adptr_xae_cnt += OTX2_CPT_DEFAULT_CMD_QLEN;
- sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)dev);
-
- return 0;
-}
-
-int
-otx2_ca_qp_del(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id)
-{
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- RTE_SET_USED(dev);
-
- ret = 0;
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
deleted file mode 100644
index b33cb7e139..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "cpt_pmd_logs.h"
-#include "cpt_ucode.h"
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_cryptodev_qp.h"
-
-static inline void
-otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
- struct rte_crypto_op *cop, uintptr_t *rsp,
- uint8_t cc)
-{
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- memset(cop->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session));
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
-}
-
-static inline uint64_t
-otx2_handle_crypto_event(uint64_t get_work1)
-{
- struct cpt_request_info *req;
- const struct otx2_cpt_qp *qp;
- struct rte_crypto_op *cop;
- uintptr_t *rsp;
- void *metabuf;
- uint8_t cc;
-
- req = (struct cpt_request_info *)(get_work1);
- cc = otx2_cpt_compcode_get(req);
- qp = req->qp;
-
- rsp = req->op;
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- otx2_ca_deq_post_process(qp, cop, rsp, cc);
-
- rte_mempool_put(qp->meta_info.pool, metabuf);
-
- return (uint64_t)(cop);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
deleted file mode 100644
index 1fc56f903b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ /dev/null
@@ -1,83 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2021 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_event_crypto_adapter.h>
-#include <rte_eventdev.h>
-
-#include <otx2_cryptodev_qp.h>
-#include <otx2_worker.h>
-
-static inline uint16_t
-otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
-{
- union rte_event_crypto_metadata *m_data;
- struct rte_crypto_op *crypto_op;
- struct rte_cryptodev *cdev;
- struct otx2_cpt_qp *qp;
- uint8_t cdev_id;
- uint16_t qp_id;
-
- crypto_op = ev->event_ptr;
- if (crypto_op == NULL)
- return 0;
-
- if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- crypto_op->sym->session);
- if (m_data == NULL)
- goto free_op;
-
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)crypto_op +
- crypto_op->private_data_offset);
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else {
- goto free_op;
- }
-
- cdev = &rte_cryptodevs[cdev_id];
- qp = cdev->data->queue_pairs[qp_id];
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(tag_op);
- if (qp->ca_enable)
- return cdev->enqueue_burst(qp, &crypto_op, 1);
-
-free_op:
- rte_pktmbuf_free(crypto_op->sym->m_src);
- rte_crypto_op_free(crypto_op);
- rte_errno = EINVAL;
- return 0;
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->tag_op, ev);
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
deleted file mode 100644
index 9b7ad27b04..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ /dev/null
@@ -1,272 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static void
-sso_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ggrp;
-
- ggrp = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + SSO_LF_GGRP_INT);
- if (intr == 0)
- return;
-
- otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSO_LF_GGRP_INT);
-}
-
-static int
-sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-ssow_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t gws = (base >> 12) & 0xFF;
- uint64_t intr;
-
- intr = otx2_read64(base + SSOW_LF_GWS_INT);
- if (intr == 0)
- return;
-
- otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSOW_LF_GWS_INT);
-}
-
-static int
-ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t ggrp_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
-}
-
-static void
-ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t gws_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
-}
-
-int
-sso_register_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc = -EINVAL;
- uint8_t nb_ports;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
- i, dev->sso_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < nb_ports; i++) {
- if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
- i, dev->ssow_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
- base);
- }
-
-fail:
- return rc;
-}
-
-void
-sso_unregister_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports;
- int i;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
- }
-}
-
-static void
-tim_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ring;
-
- ring = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + TIM_LF_NRSPERR_INT);
- otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr);
- intr = otx2_read64(base + TIM_LF_RAS_INT);
- otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + TIM_LF_NRSPERR_INT);
- otx2_write64(intr, base + TIM_LF_RAS_INT);
-}
-
-static int
-tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-}
-
-int
-tim_register_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- int rc = -EINVAL;
- uintptr_t base;
-
- if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x",
- ring_id, dev->tim_msixoff[ring_id]);
- goto fail;
- }
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-fail:
- return rc;
-}
-
-void
-tim_unregister_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- uintptr_t base;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c
deleted file mode 100644
index 48bfaf893d..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_selftest.c
+++ /dev/null
@@ -1,1517 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_debug.h>
-#include <rte_eal.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_hexdump.h>
-#include <rte_launch.h>
-#include <rte_lcore.h>
-#include <rte_mbuf.h>
-#include <rte_malloc.h>
-#include <rte_memcpy.h>
-#include <rte_per_lcore.h>
-#include <rte_random.h>
-#include <rte_test.h>
-
-#include "otx2_evdev.h"
-
-#define NUM_PACKETS (1024)
-#define MAX_EVENTS (1024)
-
-#define OCTEONTX2_TEST_RUN(setup, teardown, test) \
- octeontx_test_run(setup, teardown, test, #test)
-
-static int total;
-static int passed;
-static int failed;
-static int unsupported;
-
-static int evdev;
-static struct rte_mempool *eventdev_test_mempool;
-
-struct event_attr {
- uint32_t flow_id;
- uint8_t event_type;
- uint8_t sub_event_type;
- uint8_t sched_type;
- uint8_t queue;
- uint8_t port;
-};
-
-static uint32_t seqn_list_index;
-static int seqn_list[NUM_PACKETS];
-
-static inline void
-seqn_list_init(void)
-{
- RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
- memset(seqn_list, 0, sizeof(seqn_list));
- seqn_list_index = 0;
-}
-
-static inline int
-seqn_list_update(int val)
-{
- if (seqn_list_index >= NUM_PACKETS)
- return -1;
-
- seqn_list[seqn_list_index++] = val;
- rte_smp_wmb();
- return 0;
-}
-
-static inline int
-seqn_list_check(int limit)
-{
- int i;
-
- for (i = 0; i < limit; i++) {
- if (seqn_list[i] != i) {
- otx2_err("Seqn mismatch %d %d", seqn_list[i], i);
- return -1;
- }
- }
- return 0;
-}
-
-struct test_core_param {
- rte_atomic32_t *total_events;
- uint64_t dequeue_tmo_ticks;
- uint8_t port;
- uint8_t sched_type;
-};
-
-static int
-testsuite_setup(void)
-{
- const char *eventdev_name = "event_octeontx2";
-
- evdev = rte_event_dev_get_dev_id(eventdev_name);
- if (evdev < 0) {
- otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
- return -1;
- }
- return 0;
-}
-
-static void
-testsuite_teardown(void)
-{
- rte_event_dev_close(evdev);
-}
-
-static inline void
-devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
- struct rte_event_dev_info *info)
-{
- memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
- dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
- dev_conf->nb_event_ports = info->max_event_ports;
- dev_conf->nb_event_queues = info->max_event_queues;
- dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
- dev_conf->nb_event_port_dequeue_depth =
- info->max_event_port_dequeue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_events_limit =
- info->max_num_events;
-}
-
-enum {
- TEST_EVENTDEV_SETUP_DEFAULT,
- TEST_EVENTDEV_SETUP_PRIORITY,
- TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
-};
-
-static inline int
-_eventdev_setup(int mode)
-{
- const char *pool_name = "evdev_octeontx_test_pool";
- struct rte_event_dev_config dev_conf;
- struct rte_event_dev_info info;
- int i, ret;
-
- /* Create and destroy pool for each test case to make it standalone */
- eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
- 0, 0, 512,
- rte_socket_id());
- if (!eventdev_test_mempool) {
- otx2_err("ERROR creating mempool");
- return -1;
- }
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-
- devconf_set_default_sane_values(&dev_conf, &info);
- if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
- dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
-
- ret = rte_event_dev_configure(evdev, &dev_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
-
- uint32_t queue_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
- if (queue_count > 8)
- queue_count = 8;
-
- /* Configure event queues(0 to n) with
- * RTE_EVENT_DEV_PRIORITY_HIGHEST to
- * RTE_EVENT_DEV_PRIORITY_LOWEST
- */
- uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
- queue_count;
- for (i = 0; i < (int)queue_count; i++) {
- struct rte_event_queue_conf queue_conf;
-
- ret = rte_event_queue_default_conf_get(evdev, i,
- &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
- i);
- queue_conf.priority = i * step;
- ret = rte_event_queue_setup(evdev, i, &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
-
- } else {
- /* Configure event queues with default priority */
- for (i = 0; i < (int)queue_count; i++) {
- ret = rte_event_queue_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
- }
- /* Configure event ports */
- uint32_t port_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
- ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
- i);
- }
-
- ret = rte_event_dev_start(evdev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
-
- return 0;
-}
-
-static inline int
-eventdev_setup(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
-}
-
-static inline int
-eventdev_setup_priority(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
-}
-
-static inline int
-eventdev_setup_dequeue_timeout(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
-}
-
-static inline void
-eventdev_teardown(void)
-{
- rte_event_dev_stop(evdev);
- rte_mempool_free(eventdev_test_mempool);
-}
-
-static inline void
-update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
- uint32_t flow_id, uint8_t event_type,
- uint8_t sub_event_type, uint8_t sched_type,
- uint8_t queue, uint8_t port)
-{
- struct event_attr *attr;
-
- /* Store the event attributes in mbuf for future reference */
- attr = rte_pktmbuf_mtod(m, struct event_attr *);
- attr->flow_id = flow_id;
- attr->event_type = event_type;
- attr->sub_event_type = sub_event_type;
- attr->sched_type = sched_type;
- attr->queue = queue;
- attr->port = port;
-
- ev->flow_id = flow_id;
- ev->sub_event_type = sub_event_type;
- ev->event_type = event_type;
- /* Inject the new event */
- ev->op = RTE_EVENT_OP_NEW;
- ev->sched_type = sched_type;
- ev->queue_id = queue;
- ev->mbuf = m;
-}
-
-static inline int
-inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
- uint8_t sched_type, uint8_t queue, uint8_t port,
- unsigned int events)
-{
- struct rte_mbuf *m;
- unsigned int i;
-
- for (i = 0; i < events; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- update_event_and_validation_attr(m, &ev, flow_id, event_type,
- sub_event_type, sched_type,
- queue, port);
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- return 0;
-}
-
-static inline int
-check_excess_events(uint8_t port)
-{
- uint16_t valid_event;
- struct rte_event ev;
- int i;
-
- /* Check for excess events, try for a few times and exit */
- for (i = 0; i < 32; i++) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-
- RTE_TEST_ASSERT_SUCCESS(valid_event,
- "Unexpected valid event=%d",
- *rte_event_pmd_selftest_seqn(ev.mbuf));
- }
- return 0;
-}
-
-static inline int
-generate_random_events(const unsigned int total_events)
-{
- struct rte_event_dev_info info;
- uint32_t queue_count;
- unsigned int i;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
- for (i = 0; i < total_events; i++) {
- ret = inject_events(
- rte_rand() % info.max_event_queue_flows /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- rte_rand() % queue_count /* queue */,
- 0 /* port */,
- 1 /* events */);
- if (ret)
- return -1;
- }
- return ret;
-}
-
-
-static inline int
-validate_event(struct rte_event *ev)
-{
- struct event_attr *attr;
-
- attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
- RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
- "flow_id mismatch enq=%d deq =%d",
- attr->flow_id, ev->flow_id);
- RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
- "event_type mismatch enq=%d deq =%d",
- attr->event_type, ev->event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
- "sub_event_type mismatch enq=%d deq =%d",
- attr->sub_event_type, ev->sub_event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
- "sched_type mismatch enq=%d deq =%d",
- attr->sched_type, ev->sched_type);
- RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- attr->queue, ev->queue_id);
- return 0;
-}
-
-typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
- struct rte_event *ev);
-
-static inline int
-consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
-{
- uint32_t events = 0, forward_progress_cnt = 0, index = 0;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (1) {
- if (++forward_progress_cnt > UINT16_MAX) {
- otx2_err("Detected deadlock");
- return -1;
- }
-
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- forward_progress_cnt = 0;
- ret = validate_event(&ev);
- if (ret)
- return -1;
-
- if (fn != NULL) {
- ret = fn(index, port, &ev);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to validate test specific event");
- }
-
- ++index;
-
- rte_pktmbuf_free(ev.mbuf);
- if (++events >= total_events)
- break;
- }
-
- return check_excess_events(port);
-}
-
-static int
-validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
- "index=%d != seqn=%d",
- index, *rte_event_pmd_selftest_seqn(ev->mbuf));
- return 0;
-}
-
-static inline int
-test_simple_enqdeq(uint8_t sched_type)
-{
- int ret;
-
- ret = inject_events(0 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type */,
- sched_type,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
-}
-
-static int
-test_simple_enqdeq_ordered(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_simple_enqdeq_atomic(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_simple_enqdeq_parallel(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. On dequeue, verify the enqueued event attributes using a single
- * event port (port 0)
- */
-static int
-test_multi_queue_enq_single_port_deq(void)
-{
- int ret;
-
- ret = generate_random_events(MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, NULL);
-}
-
-/*
- * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
- * operation
- *
- * For example, Inject 32 events over 0..7 queues
- * enqueue events 0, 8, 16, 24 in queue 0
- * enqueue events 1, 9, 17, 25 in queue 1
- * ..
- * ..
- * enqueue events 7, 15, 23, 31 in queue 7
- *
- * On dequeue, validate that the events come in 0,8,16,24,1,9,17,25..,7,15,23,31
- * order from queue 0 (highest priority) to queue 7 (lowest priority)
- */
-static int
-validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- uint32_t queue_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- uint32_t range = MAX_EVENTS / queue_count;
- uint32_t expected_val = (index % range) * queue_count;
-
- expected_val += ev->queue_id;
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(
- *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
- "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
- *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
- range, queue_count, MAX_EVENTS);
- return 0;
-}
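The arithmetic behind `validate_queue_priority()` above can be sketched on its own: events are injected round-robin (event `i` goes to queue `i % queue_count`) and the scheduler drains queue 0 first, then queue 1, and so on, so the `index`-th dequeued event must carry sequence number `(index % range) * queue_count + queue_id`. The helper name below is hypothetical, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the check in validate_queue_priority():
 * range = max_events / queue_count events land in each queue, and
 * queues are drained in priority order, so the index-th dequeued
 * event from queue queue_id carries this sequence number. */
static uint32_t
expected_seqn(uint32_t index, uint8_t queue_id, uint32_t queue_count,
	      uint32_t max_events)
{
	uint32_t range = max_events / queue_count;

	return (index % range) * queue_count + queue_id;
}
```

For 32 events over 8 queues, the fifth dequeue (index 4) is the first event of queue 1, i.e. seqn 1, matching the comment above the validator.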
-
-static int
-test_multi_queue_priority(void)
-{
- int i, max_evts_roundoff;
- /* See validate_queue_priority() comments for priority validate logic */
- uint32_t queue_count;
- struct rte_mbuf *m;
- uint8_t queue;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- max_evts_roundoff = MAX_EVENTS / queue_count;
- max_evts_roundoff *= queue_count;
-
- for (i = 0; i < max_evts_roundoff; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- queue = i % queue_count;
- update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
- 0, RTE_SCHED_TYPE_PARALLEL,
- queue, 0);
- rte_event_enqueue_burst(evdev, 0, &ev, 1);
- }
-
- return consume_events(0, max_evts_roundoff, validate_queue_priority);
-}
-
-static int
-worker_multi_port_fn(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- ret = validate_event(&ev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- }
-
- return 0;
-}
-
-static inline int
-wait_workers_to_join(const rte_atomic32_t *count)
-{
- uint64_t cycles, print_cycles;
-
- cycles = rte_get_timer_cycles();
- print_cycles = cycles;
- while (rte_atomic32_read(count)) {
- uint64_t new_cycles = rte_get_timer_cycles();
-
- if (new_cycles - print_cycles > rte_get_timer_hz()) {
- otx2_err("Events %d", rte_atomic32_read(count));
- print_cycles = new_cycles;
- }
- if (new_cycles - cycles > rte_get_timer_hz() * 10000000000) {
- otx2_err("No schedules for seconds, deadlock (%d)",
- rte_atomic32_read(count));
- rte_event_dev_dump(evdev, stdout);
- cycles = new_cycles;
- return -1;
- }
- }
- rte_eal_mp_wait_lcore();
-
- return 0;
-}
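The deadlock detection in `wait_workers_to_join()` reduces to one comparison: has more than `timeout` worth of timer cycles elapsed since the last observed progress? A minimal sketch, with hypothetical names (the driver inlines this against `rte_get_timer_cycles()`/`rte_get_timer_hz()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical watchdog predicate: cycle counters are monotonic,
 * so unsigned subtraction gives the elapsed cycles since the last
 * forward progress; expire once that exceeds hz * timeout_secs. */
static bool
watchdog_expired(uint64_t now, uint64_t last_progress, uint64_t hz,
		 uint64_t timeout_secs)
{
	return now - last_progress > hz * timeout_secs;
}
```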
-
-static inline int
-launch_workers_and_wait(int (*main_thread)(void *),
- int (*worker_thread)(void *), uint32_t total_events,
- uint8_t nb_workers, uint8_t sched_type)
-{
- rte_atomic32_t atomic_total_events;
- struct test_core_param *param;
- uint64_t dequeue_tmo_ticks;
- uint8_t port = 0;
- int w_lcore;
- int ret;
-
- if (!nb_workers)
- return 0;
-
- rte_atomic32_set(&atomic_total_events, total_events);
- seqn_list_init();
-
- param = malloc(sizeof(struct test_core_param) * nb_workers);
- if (!param)
- return -1;
-
- ret = rte_event_dequeue_timeout_ticks(evdev,
- rte_rand() % 10000000/* 10ms */,
- &dequeue_tmo_ticks);
- if (ret) {
- free(param);
- return -1;
- }
-
- param[0].total_events = &atomic_total_events;
- param[0].sched_type = sched_type;
- param[0].port = 0;
- param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_wmb();
-
- w_lcore = rte_get_next_lcore(
- /* start core */ -1,
- /* skip main */ 1,
- /* wrap */ 0);
-	rte_eal_remote_launch(main_thread, &param[0], w_lcore);
-
- for (port = 1; port < nb_workers; port++) {
- param[port].total_events = &atomic_total_events;
- param[port].sched_type = sched_type;
- param[port].port = port;
- param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_smp_wmb();
- w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
-		rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
- }
-
- rte_smp_wmb();
- ret = wait_workers_to_join(&atomic_total_events);
- free(param);
-
- return ret;
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. Dequeue the events through multiple ports and verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_multi_port_deq(void)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- return launch_workers_and_wait(worker_multi_port_fn,
- worker_multi_port_fn, total_events,
- nr_ports, 0xff /* invalid */);
-}
-
-static
-void flush(uint8_t dev_id, struct rte_event event, void *arg)
-{
- unsigned int *count = arg;
-
- RTE_SET_USED(dev_id);
- if (event.event_type == RTE_EVENT_TYPE_CPU)
- *count = *count + 1;
-}
-
-static int
-test_dev_stop_flush(void)
-{
- unsigned int total_events = MAX_EVENTS, count = 0;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
- if (ret)
- return -2;
- rte_event_dev_stop(evdev);
- ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
- if (ret)
- return -3;
- RTE_TEST_ASSERT_EQUAL(total_events, count,
- "count mismatch total_events=%d count=%d",
- total_events, count);
-
- return 0;
-}
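`test_dev_stop_flush()` relies on the registered callback seeing every in-flight event at device stop and counting only those of type `RTE_EVENT_TYPE_CPU`. The counting itself is just a typed tally, sketched here with illustrative names (only the shape of the logic matches the driver's `flush()`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for RTE_EVENT_TYPE_CPU. */
#define EV_TYPE_CPU 1

/* Hypothetical model of the stop-flush tally: walk the event type
 * fields seen by the flush callback and count those of one type. */
static unsigned int
count_flushed(const uint8_t *types, unsigned int n, uint8_t wanted)
{
	unsigned int i, count = 0;

	for (i = 0; i < n; i++)
		if (types[i] == wanted)
			count++;
	return count;
}
```

In the test above, all generated events are CPU-type, so the tally must equal `total_events` after `rte_event_dev_stop()`.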
-
-static int
-validate_queue_to_port_single_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link queue x to port x and check correctness of link by checking
- * queue_id == x on dequeue on the specific port x
- */
-static int
-test_queue_to_port_single_link(void)
-{
- int i, nr_links, ret;
- uint32_t queue_count;
- uint32_t port_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
-
-	/* Unlink all connections created in eventdev_setup */
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_unlink(evdev, i, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0,
- "Failed to unlink all queues port=%d", i);
- }
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- nr_links = RTE_MIN(port_count, queue_count);
- const unsigned int total_events = MAX_EVENTS / nr_links;
-
- /* Link queue x to port x and inject events to queue x through port x */
- for (i = 0; i < nr_links; i++) {
- uint8_t queue = (uint8_t)i;
-
- ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, i /* port */,
- total_events /* events */);
- if (ret)
- return -1;
- }
-
- /* Verify the events generated from correct queue */
- for (i = 0; i < nr_links; i++) {
- ret = consume_events(i /* port */, total_events,
- validate_queue_to_port_single_link);
- if (ret)
- return -1;
- }
-
- return 0;
-}
-
-static int
-validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link all even number of queues to port 0 and all odd number of queues to
- * port 1 and verify the link connection on dequeue
- */
-static int
-test_queue_to_port_multi_link(void)
-{
- int ret, port0_events = 0, port1_events = 0;
- uint32_t nr_queues = 0;
- uint32_t nr_ports = 0;
- uint8_t queue, port;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
- "Queue count get failed");
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- if (nr_ports < 2) {
- otx2_err("Not enough ports to test ports=%d", nr_ports);
- return 0;
- }
-
-	/* Unlink all connections created in eventdev_setup */
- for (port = 0; port < nr_ports; port++) {
- ret = rte_event_port_unlink(evdev, port, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
- port);
- }
-
- const unsigned int total_events = MAX_EVENTS / nr_queues;
-
- /* Link all even number of queues to port0 and odd numbers to port 1*/
- for (queue = 0; queue < nr_queues; queue++) {
- port = queue & 0x1;
- ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
- queue, port);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, port /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- if (port == 0)
- port0_events += total_events;
- else
- port1_events += total_events;
- }
-
- ret = consume_events(0 /* port */, port0_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
- ret = consume_events(1 /* port */, port1_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
-
- return 0;
-}
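The link rule used in `test_queue_to_port_multi_link()` is the low bit of the queue number: even queues feed port 0, odd queues feed port 1, which is exactly what `validate_queue_to_port_multi_link()` checks with `ev->queue_id & 0x1`. Sketched as a helper (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mapping behind the multi-link test: a queue is linked
 * to the port given by its lowest bit, so on dequeue
 * port == queue_id & 0x1 must hold for every event. */
static uint8_t
link_port_for_queue(uint8_t queue)
{
	return queue & 0x1;
}
```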
-
-static int
-worker_flow_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0 */
- if (ev.sub_event_type == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type = 1; /* stage 1 */
- ev.sched_type = new_sched_type;
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.sub_event_type = %d",
- ev.sub_event_type);
- return -1;
- }
- }
- return 0;
-}
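Stripped of the eventdev plumbing, `worker_flow_based_pipeline()` is a two-state machine keyed on `sub_event_type`: stage 0 events are retagged and forwarded to stage 1, stage 1 events are consumed, and anything else is an error. A reduced sketch with hypothetical return codes (1 = forward, 0 = consume, -1 = invalid):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical reduction of the worker's stage logic: mutate the
 * sub_event_type the way the worker does before re-enqueueing. */
static int
flow_stage_step(uint8_t *sub_event_type)
{
	if (*sub_event_type == 0) {
		*sub_event_type = 1;	/* forward to stage 1 */
		return 1;
	}
	if (*sub_event_type == 1)
		return 0;		/* last stage, consume */
	return -1;			/* invalid stage */
}
```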
-
-static int
-test_multiport_flow_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- rte_mb();
- ret = launch_workers_and_wait(worker_flow_based_pipeline,
- worker_flow_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check the events order maintained or not */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-/* Multi port ordered to atomic transaction */
-static int
-test_multi_port_flow_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_ordered_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_ordered_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_atomic_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_atomic_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_parallel_to_atomic(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_parallel_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_parallel_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_group_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0(group 0) */
- if (ev.queue_id == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = new_sched_type;
- ev.queue_id = 1; /* Stage 1*/
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_multiport_queue_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t queue_count;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count < 2 || !nr_ports) {
- otx2_err("Not enough queues=%d ports=%d or workers=%d",
- queue_count, nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- ret = launch_workers_and_wait(worker_group_based_pipeline,
- worker_group_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check the events order maintained or not */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-static int
-test_multi_port_queue_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_ordered_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_ordered_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_atomic_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_atomic_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_parallel_to_atomic(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_parallel_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_parallel_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.sub_event_type == 255) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
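The max-stages worker above forwards each event with an incremented `sub_event_type` until it reaches 255, so every injected event traverses exactly 256 stages before being freed. A quick model of that termination condition (helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the max-stages loop: starting from
 * sub_event_type 0, forward (increment) until the value hits 255,
 * counting how many stages the event passes through in total. */
static unsigned int
count_stages(void)
{
	uint8_t sub_event_type = 0;
	unsigned int stages = 1;	/* stage 0 itself */

	while (sub_event_type != 255) {
		sub_event_type++;	/* RTE_EVENT_OP_FORWARD to next stage */
		stages++;
	}
	return stages;
}
```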
-
-static int
-launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
-{
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d",
- nr_ports, rte_lcore_count() - 1);
- return 0;
- }
-
- /* Injects events with a 0 sequence number to total_events */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- rte_rand() %
- (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS /* events */);
- if (ret)
- return -1;
-
- return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
- 0xff /* invalid */);
-}
-
-/* Flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_flow_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_flow_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_queue_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_queue_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* Last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sub_event_type = rte_rand() % 256;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue and flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_mixed_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_mixed_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_ordered_flow_producer(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- struct rte_mbuf *m;
- int counter = 0;
-
- while (counter < NUM_PACKETS) {
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- if (m == NULL)
- continue;
-
- *rte_event_pmd_selftest_seqn(m) = counter++;
-
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- ev.flow_id = 0x1; /* Generate a fat flow */
- ev.sub_event_type = 0;
- /* Inject the new event */
- ev.op = RTE_EVENT_OP_NEW;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = RTE_SCHED_TYPE_ORDERED;
- ev.queue_id = 0;
- ev.mbuf = m;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
-
- return 0;
-}
-
-static inline int
-test_producer_consumer_ingress_order_test(int (*fn)(void *))
-{
- uint32_t nr_ports;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (rte_lcore_count() < 3 || nr_ports < 2) {
- otx2_err("### Not enough cores for test.");
- return 0;
- }
-
- launch_workers_and_wait(worker_ordered_flow_producer, fn,
- NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
- /* Check the events order maintained or not */
- return seqn_list_check(NUM_PACKETS);
-}
-
-/* Flow based producer consumer ingress order test */
-static int
-test_flow_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_flow_based_pipeline);
-}
-
-/* Queue based producer consumer ingress order test */
-static int
-test_queue_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_group_based_pipeline);
-}
-
-static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
- int (*test)(void), const char *name)
-{
- if (setup() < 0) {
- printf("Error setting up test %s", name);
- unsupported++;
- } else {
- if (test() < 0) {
- failed++;
- printf("+ TestCase [%2d] : %s failed\n", total, name);
- } else {
- passed++;
- printf("+ TestCase [%2d] : %s succeeded\n", total,
- name);
- }
- }
-
- total++;
- tdown();
-}
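The bookkeeping in `octeontx_test_run()` follows a common selftest-harness shape: a failed setup counts as unsupported (the test is skipped), otherwise the test result lands in passed or failed, and the total advances either way. Sketched with the globals made explicit (names are illustrative; setup/test return < 0 on failure, as in the driver):

```c
#include <assert.h>

/* Hypothetical explicit-state version of the harness counters. */
struct run_stats {
	int total, passed, failed, unsupported;
};

/* test_rc is only meaningful when setup_rc >= 0, mirroring the fact
 * that octeontx_test_run() never invokes the test after a failed setup. */
static void
run_one(int setup_rc, int test_rc, struct run_stats *s)
{
	if (setup_rc < 0)
		s->unsupported++;
	else if (test_rc < 0)
		s->failed++;
	else
		s->passed++;
	s->total++;
}
```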
-
-int
-otx2_sso_selftest(void)
-{
- testsuite_setup();
-
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_single_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_dev_stop_flush);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_multi_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_single_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_multi_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_mixed_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_flow_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
- test_multi_queue_priority);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- printf("Total tests : %d\n", total);
- printf("Passed : %d\n", passed);
- printf("Failed : %d\n", failed);
- printf("Not supported : %d\n", unsupported);
-
- testsuite_teardown();
-
- if (failed)
- return -1;
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
deleted file mode 100644
index 74fcec8a07..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ /dev/null
@@ -1,286 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_STATS_H__
-#define __OTX2_EVDEV_STATS_H__
-
-#include "otx2_evdev.h"
-
-struct otx2_sso_xstats_name {
- const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
- const size_t offset;
- const uint64_t mask;
- const uint8_t shift;
- uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
-};
-
-static struct otx2_sso_xstats_name sso_hws_xstats[] = {
- {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
- 0x3FF, 0, {0} },
- {"affinity_arbitration_credits",
- offsetof(struct sso_hws_stats, arbitration),
- 0xF, 16, {0} },
-};
-
-static struct otx2_sso_xstats_name sso_grp_xstats[] = {
- {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
- {0} },
- {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
- 0, {0} },
- {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
- {0} },
- {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
- {0} },
- {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
- {0} },
- {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
- {0} },
- {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
- 0, {0} },
- {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
- 16, {0} },
- {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
- 0xFFFFFFFF, 0, {0} },
-};
-
-#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
-#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
-
-#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
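Each entry in the xstats tables above describes where a counter lives in the mailbox response (offset), plus a mask and shift for unpacking it; the get path computes `(raw >> shift) & mask`. For instance, `"affinity_arbitration_credits"` uses mask `0xF` and shift 16. The extraction on its own:

```c
#include <assert.h>
#include <stdint.h>

/* The bit-field extraction used by otx2_sso_xstats_get(): shift the
 * raw register value down, then mask off the field width. */
static uint64_t
xstat_extract(uint64_t raw, uint64_t mask, uint8_t shift)
{
	return (raw >> shift) & mask;
}
```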
-
-static int
-otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
- const unsigned int ids[], uint64_t values[], unsigned int n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- values[i] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- values[i] = (values[i] >> xstat->shift) &
- xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- };
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- values[i] += value;
- else
- values[i] = value;
-
- values[i] -= xstat->reset_snap[queue_port_id];
- }
-
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- int16_t queue_port_id, const uint32_t ids[], uint32_t n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- xstat->reset_snap[queue_port_id] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- xstat->reset_snap[queue_port_id] =
- (xstat->reset_snap[queue_port_id] >>
- xstat->shift) & xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void *)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- };
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- xstat->reset_snap[queue_port_id] += value;
- else
- xstat->reset_snap[queue_port_id] = value;
- }
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- uint8_t queue_port_id,
- struct rte_event_dev_xstats_name *xstats_names,
- unsigned int *ids, unsigned int size)
-{
- struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int xidx = 0;
- unsigned int i;
-
- for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_hws_xstats[i].name);
- }
-
- for (; i < OTX2_SSO_NUM_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
- }
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- break;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- break;
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- break;
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- default:
- otx2_err("Invalid mode received");
- return -EINVAL;
- };
-
- if (xstats_mode_count > size || !ids || !xstats_names)
- return xstats_mode_count;
-
- for (i = 0; i < xstats_mode_count; i++) {
- xidx = i + start_offset;
- strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
- sizeof(xstats_names[i].name));
- ids[i] = xidx;
- }
-
- return i;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
deleted file mode 100644
index 6da8b14b78..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ /dev/null
@@ -1,735 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static struct event_timer_adapter_ops otx2_tim_ops;
-
-static inline int
-tim_get_msix_offsets(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get TIM MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < dev->nb_rings; i++)
- dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i];
-
- return rc;
-}
-
-static void
-tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
-{
- uint8_t prod_flag = !tim_ring->prod_type_sp;
-
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
-#define FP(_name, _f3, _f2, _f1, flags) \
- [_f3][_f2][_f1] = otx2_tim_arm_burst_##_name,
- TIM_ARM_FASTPATH_MODES
-#undef FP
- };
-
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) \
- [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_##_name,
- TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
- };
-
- otx2_tim_ops.arm_burst =
- arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
- otx2_tim_ops.arm_tmo_tick_burst =
- arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
- otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
-}
-
-static void
-otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer_adapter_info *adptr_info)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
-
- adptr_info->max_tmo_ns = tim_ring->max_tout;
- adptr_info->min_resolution_ns = tim_ring->ena_periodic ?
- tim_ring->max_tout : tim_ring->tck_nsec;
- rte_memcpy(&adptr_info->conf, &adptr->data->conf,
- sizeof(struct rte_event_timer_adapter_conf));
-}
-
-static int
-tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
- struct rte_event_timer_adapter_conf *rcfg)
-{
- unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
- unsigned int mp_flags = 0;
- char pool_name[25];
- int rc;
-
- cache_sz /= rte_lcore_count();
- /* Create chunk pool. */
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
- mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
- otx2_tim_dbg("Using single producer mode");
- tim_ring->prod_type_sp = true;
- }
-
- snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d",
- tim_ring->ring_id);
-
- if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
- cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
-
- cache_sz = cache_sz != 0 ? cache_sz : 2;
- tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- if (!tim_ring->disable_npa) {
- tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(),
- NULL);
- if (rc < 0) {
- otx2_err("Unable to set chunkpool ops");
- goto free;
- }
-
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate chunkpool.");
- goto free;
- }
- tim_ring->aura = npa_lf_aura_handle_to_aura(
- tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = tim_ring->ena_periodic ? 1 : 0;
- } else {
- tim_ring->chunk_pool = rte_mempool_create(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, NULL, NULL, NULL, NULL,
- rte_socket_id(),
- mp_flags);
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
- tim_ring->ena_dfb = 1;
- }
-
- return 0;
-
-free:
- rte_mempool_free(tim_ring->chunk_pool);
- return rc;
-}
-
-static void
-tim_err_desc(int rc)
-{
- switch (rc) {
- case TIM_AF_NO_RINGS_LEFT:
- otx2_err("Unable to allocat new TIM ring.");
- break;
- case TIM_AF_INVALID_NPA_PF_FUNC:
- otx2_err("Invalid NPA pf func.");
- break;
- case TIM_AF_INVALID_SSO_PF_FUNC:
- otx2_err("Invalid SSO pf func.");
- break;
- case TIM_AF_RING_STILL_RUNNING:
- otx2_tim_dbg("Ring busy.");
- break;
- case TIM_AF_LF_INVALID:
- otx2_err("Invalid Ring id.");
- break;
- case TIM_AF_CSIZE_NOT_ALIGNED:
- otx2_err("Chunk size specified needs to be multiple of 16.");
- break;
- case TIM_AF_CSIZE_TOO_SMALL:
- otx2_err("Chunk size too small.");
- break;
- case TIM_AF_CSIZE_TOO_BIG:
- otx2_err("Chunk size too big.");
- break;
- case TIM_AF_INTERVAL_TOO_SMALL:
- otx2_err("Bucket traversal interval too small.");
- break;
- case TIM_AF_INVALID_BIG_ENDIAN_VALUE:
- otx2_err("Invalid Big endian value.");
- break;
- case TIM_AF_INVALID_CLOCK_SOURCE:
- otx2_err("Invalid Clock source specified.");
- break;
- case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED:
- otx2_err("GPIO clock source not enabled.");
- break;
- case TIM_AF_INVALID_BSIZE:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_PERIODIC:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_DONTFREE:
- otx2_err("Invalid Don't free value.");
- break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
- otx2_err("Don't free bit not set when periodic is enabled.");
- break;
- case TIM_AF_RING_ALREADY_DISABLED:
- otx2_err("Ring already stopped");
- break;
- default:
- otx2_err("Unknown Error.");
- }
-}
-
-static int
-otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
-{
- struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_tim_ring *tim_ring;
- struct tim_config_req *cfg_req;
- struct tim_ring_req *free_req;
- struct tim_lf_alloc_req *req;
- struct tim_lf_alloc_rsp *rsp;
- uint8_t is_periodic;
- int i, rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- if (adptr->data->id >= dev->nb_rings)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->sso_pf_func = otx2_sso_pf_func_get();
- req->ring = adptr->data->id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- return -ENODEV;
- }
-
- if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
- rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
- rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
- rsp->tenns_clk);
- else {
- rc = -ERANGE;
- goto rng_mem_err;
- }
- }
-
- is_periodic = 0;
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) {
- if (rcfg->max_tmo_ns &&
- rcfg->max_tmo_ns != rcfg->timer_tick_ns) {
- rc = -ERANGE;
- goto rng_mem_err;
- }
-
- /* Use 2 buckets to avoid contention */
- rcfg->max_tmo_ns = rcfg->timer_tick_ns;
- rcfg->timer_tick_ns /= 2;
- is_periodic = 1;
- }
-
- tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
- if (tim_ring == NULL) {
- rc = -ENOMEM;
- goto rng_mem_err;
- }
-
- adptr->data->adapter_priv = tim_ring;
-
- tim_ring->tenns_clk_freq = rsp->tenns_clk;
- tim_ring->clk_src = (int)rcfg->clk_src;
- tim_ring->ring_id = adptr->data->id;
- tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
- tim_ring->max_tout = is_periodic ?
- rcfg->timer_tick_ns * 2 : rcfg->max_tmo_ns;
- tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
- tim_ring->chunk_sz = dev->chunk_sz;
- tim_ring->nb_timers = rcfg->nb_timers;
- tim_ring->disable_npa = dev->disable_npa;
- tim_ring->ena_periodic = is_periodic;
- tim_ring->enable_stats = dev->enable_stats;
-
- for (i = 0; i < dev->ring_ctl_cnt ; i++) {
- struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
-
- if (ring_ctl->ring == tim_ring->ring_id) {
- tim_ring->chunk_sz = ring_ctl->chunk_slots ?
- ((uint32_t)(ring_ctl->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz;
- tim_ring->enable_stats = ring_ctl->enable_stats;
- tim_ring->disable_npa = ring_ctl->disable_npa;
- }
- }
-
- if (tim_ring->disable_npa) {
- tim_ring->nb_chunks =
- tim_ring->nb_timers /
- OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
- } else {
- tim_ring->nb_chunks = tim_ring->nb_timers;
- }
- tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
- sizeof(struct otx2_tim_bkt),
- RTE_CACHE_LINE_SIZE);
- if (tim_ring->bkt == NULL)
- goto bkt_mem_err;
-
- rc = tim_chnk_pool_create(tim_ring, rcfg);
- if (rc < 0)
- goto chnk_mem_err;
-
- cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox);
-
- cfg_req->ring = tim_ring->ring_id;
- cfg_req->bigendian = false;
- cfg_req->clocksource = tim_ring->clk_src;
- cfg_req->enableperiodic = tim_ring->ena_periodic;
- cfg_req->enabledontfreebuffer = tim_ring->ena_dfb;
- cfg_req->bucketsize = tim_ring->nb_bkts;
- cfg_req->chunksize = tim_ring->chunk_sz;
- cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec,
- tim_ring->tenns_clk_freq);
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- goto chnk_mem_err;
- }
-
- tim_ring->base = dev->bar2 +
- (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
-
- rc = tim_register_irq(tim_ring->ring_id);
- if (rc < 0)
- goto chnk_mem_err;
-
- otx2_write64((uint64_t)tim_ring->bkt,
- tim_ring->base + TIM_LF_RING_BASE);
- otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
-
- /* Set fastpath ops. */
- tim_set_fp_ops(tim_ring);
-
- /* Update SSO xae count. */
- sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)tim_ring,
- RTE_EVENT_TYPE_TIMER);
- sso_xae_reconfigure(dev->event_dev);
-
- otx2_tim_dbg("Total memory used %"PRIu64"MB\n",
- (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz)
- + (tim_ring->nb_bkts * sizeof(struct otx2_tim_bkt))) /
- BIT_ULL(20)));
-
- return rc;
-
-chnk_mem_err:
- rte_free(tim_ring->bkt);
-bkt_mem_err:
- rte_free(tim_ring);
-rng_mem_err:
- free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- free_req->ring = adptr->data->id;
- otx2_mbox_process(dev->mbox);
- return rc;
-}
-
-static void
-otx2_tim_calibrate_start_tsc(struct otx2_tim_ring *tim_ring)
-{
-#define OTX2_TIM_CALIB_ITER 1E6
- uint32_t real_bkt, bucket;
- int icount, ecount = 0;
- uint64_t bkt_cyc;
-
- for (icount = 0; icount < OTX2_TIM_CALIB_ITER; icount++) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- bkt_cyc = tim_cntvct();
- bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
- tim_ring->tck_int;
- bucket = bucket % (tim_ring->nb_bkts);
- tim_ring->ring_start_cyc = bkt_cyc - (real_bkt *
- tim_ring->tck_int);
- if (bucket != real_bkt)
- ecount++;
- }
- tim_ring->last_updt_cyc = bkt_cyc;
- otx2_tim_dbg("Bucket mispredict %3.2f distance %d\n",
- 100 - (((double)(icount - ecount) / (double)icount) * 100),
- bucket - real_bkt);
-}
-
-static int
-otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_enable_rsp *rsp;
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- goto fail;
- }
- tim_ring->ring_start_cyc = rsp->timestarted;
- tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, tim_cntfrq());
- tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
- tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
- tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
-
- otx2_tim_calibrate_start_tsc(tim_ring);
-
-fail:
- return rc;
-}
-
-static int
-otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- rc = -EBUSY;
- }
-
- return rc;
-}
-
-static int
-otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- tim_unregister_irq(tim_ring->ring_id);
-
- req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- return -EBUSY;
- }
-
- rte_free(tim_ring->bkt);
- rte_mempool_free(tim_ring->chunk_pool);
- rte_free(adptr->data->adapter_priv);
-
- return 0;
-}
-
-static int
-otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter,
- struct rte_event_timer_adapter_stats *stats)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
- uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
-
- stats->evtim_exp_count = __atomic_load_n(&tim_ring->arm_cnt,
- __ATOMIC_RELAXED);
- stats->ev_enq_count = stats->evtim_exp_count;
- stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc,
- &tim_ring->fast_div);
- return 0;
-}
-
-static int
-otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
-
- __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
- return 0;
-}
-
-int
-otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
- uint32_t *caps, const struct event_timer_adapter_ops **ops)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
-
- RTE_SET_USED(flags);
-
- if (dev == NULL)
- return -ENODEV;
-
- otx2_tim_ops.init = otx2_tim_ring_create;
- otx2_tim_ops.uninit = otx2_tim_ring_free;
- otx2_tim_ops.start = otx2_tim_ring_start;
- otx2_tim_ops.stop = otx2_tim_ring_stop;
- otx2_tim_ops.get_info = otx2_tim_ring_info_get;
-
- if (dev->enable_stats) {
- otx2_tim_ops.stats_get = otx2_tim_stats_get;
- otx2_tim_ops.stats_reset = otx2_tim_stats_reset;
- }
-
- /* Store evdev pointer for later use. */
- dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
- *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC;
- *ops = &otx2_tim_ops;
-
- return 0;
-}
-
-#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
-#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
-#define OTX2_TIM_STATS_ENA "tim_stats_ena"
-#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
-#define OTX2_TIM_RING_CTL "tim_ring_ctl"
-
-static void
-tim_parse_ring_param(char *value, void *opaque)
-{
- struct otx2_tim_evdev *dev = opaque;
- struct otx2_tim_ctl ring_ctl = {0};
- char *tok = strtok(value, "-");
- struct otx2_tim_ctl *old_ptr;
- uint16_t *val;
-
- val = (uint16_t *)&ring_ctl;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&ring_ctl.enable_stats + 1)) {
- otx2_err(
- "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
- return;
- }
-
- dev->ring_ctl_cnt++;
- old_ptr = dev->ring_ctl_data;
- dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data,
- sizeof(struct otx2_tim_ctl) *
- dev->ring_ctl_cnt, 0);
- if (dev->ring_ctl_data == NULL) {
- dev->ring_ctl_data = old_ptr;
- dev->ring_ctl_cnt--;
- return;
- }
-
- dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
-}
-
-static void
-tim_parse_ring_ctl_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- tim_parse_ring_param(start + 1, opaque);
- start = end;
- s = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
- * isn't allowed. 0 represents default.
- */
- tim_parse_ring_ctl_list(value, opaque);
-
- return 0;
-}
-
-static void
-tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
-{
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- return;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
- &parse_kvargs_flag, &dev->disable_npa);
- rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
- &parse_kvargs_value, &dev->chunk_slots);
- rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
- &dev->enable_stats);
- rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
- &dev->min_ring_cnt);
- rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL,
- &tim_parse_kvargs_dict, &dev);
-
- rte_kvargs_free(kvlist);
-}
-
-void
-otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
-{
- struct rsrc_attach_req *atch_req;
- struct rsrc_detach_req *dtch_req;
- struct free_rsrcs_rsp *rsrc_cnt;
- const struct rte_memzone *mz;
- struct otx2_tim_evdev *dev;
- int rc;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
- sizeof(struct otx2_tim_evdev),
- rte_socket_id(), 0);
- if (mz == NULL) {
- otx2_tim_dbg("Unable to allocate memory for TIM Event device");
- return;
- }
-
- dev = mz->addr;
- dev->pci_dev = pci_dev;
- dev->mbox = cmn_dev->mbox;
- dev->bar2 = cmn_dev->bar2;
-
- tim_parse_devargs(pci_dev->device.devargs, dev);
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count.");
- goto mz_free;
- }
-
- dev->nb_rings = dev->min_ring_cnt ?
- RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim;
-
- if (!dev->nb_rings) {
- otx2_tim_dbg("No TIM Logical functions provisioned.");
- goto mz_free;
- }
-
- atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
- atch_req->modify = true;
- atch_req->timlfs = dev->nb_rings;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- otx2_err("Unable to attach TIM rings.");
- goto mz_free;
- }
-
- rc = tim_get_msix_offsets();
- if (rc < 0) {
- otx2_err("Unable to get MSIX offsets for TIM.");
- goto detach;
- }
-
- if (dev->chunk_slots &&
- dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
- dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
- dev->chunk_sz = (dev->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT;
- } else {
- dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
- }
-
- return;
-
-detach:
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
-mz_free:
- rte_memzone_free(mz);
-}
-
-void
-otx2_tim_fini(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct rsrc_detach_req *dtch_req;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
- rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
-}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
deleted file mode 100644
index dac642e0e1..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ /dev/null
@@ -1,256 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_EVDEV_H__
-#define __OTX2_TIM_EVDEV_H__
-
-#include <event_timer_adapter_pmd.h>
-#include <rte_event_timer_adapter.h>
-#include <rte_reciprocal.h>
-
-#include "otx2_dev.h"
-
-#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
-
-#define otx2_tim_func_trace otx2_tim_dbg
-
-#define TIM_LF_RING_AURA (0x0)
-#define TIM_LF_RING_BASE (0x130)
-#define TIM_LF_NRSPERR_INT (0x200)
-#define TIM_LF_NRSPERR_INT_W1S (0x208)
-#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
-#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
-#define TIM_LF_RAS_INT (0x300)
-#define TIM_LF_RAS_INT_W1S (0x308)
-#define TIM_LF_RAS_INT_ENA_W1S (0x310)
-#define TIM_LF_RAS_INT_ENA_W1C (0x318)
-#define TIM_LF_RING_REL (0x400)
-
-#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
-#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \
- TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
-#define TIM_BUCKET_W1_S_LOCK (40)
-#define TIM_BUCKET_W1_M_LOCK ((1ULL << \
- (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \
- TIM_BUCKET_W1_S_LOCK)) - 1)
-#define TIM_BUCKET_W1_S_RSVD (35)
-#define TIM_BUCKET_W1_S_BSK (34)
-#define TIM_BUCKET_W1_M_BSK ((1ULL << \
- (TIM_BUCKET_W1_S_RSVD - \
- TIM_BUCKET_W1_S_BSK)) - 1)
-#define TIM_BUCKET_W1_S_HBT (33)
-#define TIM_BUCKET_W1_M_HBT ((1ULL << \
- (TIM_BUCKET_W1_S_BSK - \
- TIM_BUCKET_W1_S_HBT)) - 1)
-#define TIM_BUCKET_W1_S_SBT (32)
-#define TIM_BUCKET_W1_M_SBT ((1ULL << \
- (TIM_BUCKET_W1_S_HBT - \
- TIM_BUCKET_W1_S_SBT)) - 1)
-#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
-#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \
- (TIM_BUCKET_W1_S_SBT - \
- TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
-
-#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
-
-#define TIM_BUCKET_CHUNK_REMAIN \
- (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
-
-#define TIM_BUCKET_LOCK \
- (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
-
-#define TIM_BUCKET_SEMA_WLOCK \
- (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
-
-#define OTX2_MAX_TIM_RINGS (256)
-#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
-#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
-#define OTX2_TIM_CHUNK_ALIGNMENT (16)
-#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \
- OTX2_TIM_CHUNK_ALIGNMENT)
-#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
-#define OTX2_TIM_MIN_CHUNK_SLOTS (0x8)
-#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
-#define OTX2_TIM_MIN_TMO_TKS (256)
-
-#define OTX2_TIM_SP 0x1
-#define OTX2_TIM_MP 0x2
-#define OTX2_TIM_ENA_FB 0x10
-#define OTX2_TIM_ENA_DFB 0x20
-#define OTX2_TIM_ENA_STATS 0x40
-
-enum otx2_tim_clk_src {
- OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
- OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
- OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
- OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
-};
-
-struct otx2_tim_bkt {
- uint64_t first_chunk;
- union {
- uint64_t w1;
- struct {
- uint32_t nb_entry;
- uint8_t sbt:1;
- uint8_t hbt:1;
- uint8_t bsk:1;
- uint8_t rsvd:5;
- uint8_t lock;
- int16_t chunk_remainder;
- };
- };
- uint64_t current_chunk;
- uint64_t pad;
-} __rte_packed __rte_aligned(32);
-
-struct otx2_tim_ent {
- uint64_t w0;
- uint64_t wqe;
-} __rte_packed;
-
-struct otx2_tim_ctl {
- uint16_t ring;
- uint16_t chunk_slots;
- uint16_t disable_npa;
- uint16_t enable_stats;
-};
-
-struct otx2_tim_evdev {
- struct rte_pci_device *pci_dev;
- struct rte_eventdev *event_dev;
- struct otx2_mbox *mbox;
- uint16_t nb_rings;
- uint32_t chunk_sz;
- uintptr_t bar2;
- /* Dev args */
- uint8_t disable_npa;
- uint16_t chunk_slots;
- uint16_t min_ring_cnt;
- uint8_t enable_stats;
- uint16_t ring_ctl_cnt;
- struct otx2_tim_ctl *ring_ctl_data;
- /* HW const */
- /* MSIX offsets */
- uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
-};
-
-struct otx2_tim_ring {
- uintptr_t base;
- uint16_t nb_chunk_slots;
- uint32_t nb_bkts;
- uint64_t last_updt_cyc;
- uint64_t ring_start_cyc;
- uint64_t tck_int;
- uint64_t tot_int;
- struct otx2_tim_bkt *bkt;
- struct rte_mempool *chunk_pool;
- struct rte_reciprocal_u64 fast_div;
- struct rte_reciprocal_u64 fast_bkt;
- uint64_t arm_cnt;
- uint8_t prod_type_sp;
- uint8_t enable_stats;
- uint8_t disable_npa;
- uint8_t ena_dfb;
- uint8_t ena_periodic;
- uint16_t ring_id;
- uint32_t aura;
- uint64_t nb_timers;
- uint64_t tck_nsec;
- uint64_t max_tout;
- uint64_t nb_chunks;
- uint64_t chunk_sz;
- uint64_t tenns_clk_freq;
- enum otx2_tim_clk_src clk_src;
-} __rte_cache_aligned;
-
-static inline struct otx2_tim_evdev *
-tim_priv_get(void)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
- if (mz == NULL)
- return NULL;
-
- return mz->addr;
-}
-
-#ifdef RTE_ARCH_ARM64
-static inline uint64_t
-tim_cntvct(void)
-{
- return __rte_arm64_cntvct();
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return __rte_arm64_cntfrq();
-}
-#else
-static inline uint64_t
-tim_cntvct(void)
-{
- return 0;
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return 0;
-}
-#endif
-
-#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, 0, OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(mp, 0, 0, 1, OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(fb_sp, 0, 1, 0, OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(fb_mp, 0, 1, 1, OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
- FP(stats_mod_sp, 1, 0, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(stats_mod_mp, 1, 0, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(stats_mod_fb_sp, 1, 1, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(stats_mod_fb_mp, 1, 1, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_MP)
-
-#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, 0, OTX2_TIM_ENA_DFB) \
- FP(fb, 0, 1, OTX2_TIM_ENA_FB) \
- FP(stats_dfb, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB) \
- FP(stats_fb, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB)
-
-#define FP(_name, _f3, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint16_t nb_timers);
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_tmo_tick_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint64_t timeout_tick, \
- const uint16_t nb_timers);
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t otx2_tim_timer_cancel_burst(
- const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim, const uint16_t nb_timers);
-
-int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
- uint32_t *caps,
- const struct event_timer_adapter_ops **ops);
-
-void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
-void otx2_tim_fini(void);
-
-/* TIM IRQ */
-int tim_register_irq(uint16_t ring_id);
-void tim_unregister_irq(uint16_t ring_id);
-
-#endif /* __OTX2_TIM_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
deleted file mode 100644
index 9ee07958fd..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ /dev/null
@@ -1,192 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_tim_evdev.h"
-#include "otx2_tim_worker.h"
-
-static inline int
-tim_arm_checks(const struct otx2_tim_ring * const tim_ring,
- struct rte_event_timer * const tim)
-{
- if (unlikely(tim->state)) {
- tim->state = RTE_EVENT_TIMER_ERROR;
- rte_errno = EALREADY;
- goto fail;
- }
-
- if (unlikely(!tim->timeout_ticks ||
- tim->timeout_ticks >= tim_ring->nb_bkts)) {
- tim->state = tim->timeout_ticks ? RTE_EVENT_TIMER_ERROR_TOOLATE
- : RTE_EVENT_TIMER_ERROR_TOOEARLY;
- rte_errno = EINVAL;
- goto fail;
- }
-
- return 0;
-
-fail:
- return -EINVAL;
-}
-
-static inline void
-tim_format_event(const struct rte_event_timer * const tim,
- struct otx2_tim_ent * const entry)
-{
- entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
- (tim->ev.event & 0xFFFFFFFFF);
- entry->wqe = tim->ev.u64;
-}
-
-static inline void
-tim_sync_start_cyc(struct otx2_tim_ring *tim_ring)
-{
- uint64_t cur_cyc = tim_cntvct();
- uint32_t real_bkt;
-
- if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- cur_cyc = tim_cntvct();
-
- tim_ring->ring_start_cyc = cur_cyc -
- (real_bkt * tim_ring->tck_int);
- tim_ring->last_updt_cyc = cur_cyc;
- }
-
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers,
- const uint8_t flags)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_ent entry;
- uint16_t index;
- int ret;
-
- tim_sync_start_cyc(tim_ring);
- for (index = 0; index < nb_timers; index++) {
- if (tim_arm_checks(tim_ring, tim[index]))
- break;
-
- tim_format_event(tim[index], &entry);
- if (flags & OTX2_TIM_SP)
- ret = tim_add_entry_sp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
- if (flags & OTX2_TIM_MP)
- ret = tim_add_entry_mp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
-
- if (unlikely(ret)) {
- rte_errno = -ret;
- break;
- }
- }
-
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
-
- return index;
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint64_t timeout_tick,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned;
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- uint16_t set_timers = 0;
- uint16_t arr_idx = 0;
- uint16_t idx;
- int ret;
-
- if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) {
- const enum rte_event_timer_state state = timeout_tick ?
- RTE_EVENT_TIMER_ERROR_TOOLATE :
- RTE_EVENT_TIMER_ERROR_TOOEARLY;
- for (idx = 0; idx < nb_timers; idx++)
- tim[idx]->state = state;
-
- rte_errno = EINVAL;
- return 0;
- }
-
- tim_sync_start_cyc(tim_ring);
- while (arr_idx < nb_timers) {
- for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers);
- idx++, arr_idx++) {
- tim_format_event(tim[arr_idx], &entry[idx]);
- }
- ret = tim_add_entry_brst(tim_ring, timeout_tick,
- &tim[set_timers], entry, idx, flags);
- set_timers += ret;
- if (ret != idx)
- break;
- }
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
- __ATOMIC_RELAXED);
-
- return set_timers;
-}
-
-#define FP(_name, _f3, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \
-}
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_tmo_tick_burst_ ## _name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint64_t timeout_tick, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
- nb_timers, _flags); \
-}
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t
-otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers)
-{
- uint16_t index;
- int ret;
-
- RTE_SET_USED(adptr);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- for (index = 0; index < nb_timers; index++) {
- if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
- rte_errno = EALREADY;
- break;
- }
-
- if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
- rte_errno = EINVAL;
- break;
- }
- ret = tim_rm_entry(tim[index]);
- if (ret) {
- rte_errno = -ret;
- break;
- }
- }
-
- return index;
-}
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
deleted file mode 100644
index efe88a8692..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ /dev/null
@@ -1,598 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_WORKER_H__
-#define __OTX2_TIM_WORKER_H__
-
-#include "otx2_tim_evdev.h"
-
-static inline uint8_t
-tim_bkt_fetch_lock(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_LOCK) &
- TIM_BUCKET_W1_M_LOCK;
-}
-
-static inline int16_t
-tim_bkt_fetch_rem(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
- TIM_BUCKET_W1_M_CHUNK_REMAINDER;
-}
-
-static inline int16_t
-tim_bkt_get_rem(struct otx2_tim_bkt *bktp)
-{
- return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline uint8_t
-tim_bkt_get_hbt(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
-}
-
-static inline uint8_t
-tim_bkt_get_bsk(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
-}
-
-static inline uint64_t
-tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp)
-{
- /* Clear everything except lock. */
- const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
- __ATOMIC_ACQUIRE);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_inc_lock(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_dec_lock(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
-}
-
-static inline void
-tim_bkt_dec_lock_relaxed(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
-}
-
-static inline uint32_t
-tim_bkt_get_nent(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
- TIM_BUCKET_W1_M_NUM_ENTRIES;
-}
-
-static inline void
-tim_bkt_inc_nent(struct otx2_tim_bkt *bktp)
-{
- __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v)
-{
- __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
- TIM_BUCKET_W1_S_NUM_ENTRIES);
-
- return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
-{
- return (n - (d * rte_reciprocal_divide_u64(n, &R)));
-}
-
-static __rte_always_inline void
-tim_get_target_bucket(struct otx2_tim_ring *const tim_ring,
- const uint32_t rel_bkt, struct otx2_tim_bkt **bkt,
- struct otx2_tim_bkt **mirr_bkt)
-{
- const uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
- uint64_t bucket =
- rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
- rel_bkt;
- uint64_t mirr_bucket = 0;
-
- bucket =
- tim_bkt_fast_mod(bucket, tim_ring->nb_bkts, tim_ring->fast_bkt);
- mirr_bucket = tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
- tim_ring->nb_bkts, tim_ring->fast_bkt);
- *bkt = &tim_ring->bkt[bucket];
- *mirr_bkt = &tim_ring->bkt[mirr_bucket];
-}
-
-static struct otx2_tim_ent *
-tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
- struct otx2_tim_bkt * const bkt)
-{
-#define TIM_MAX_OUTSTANDING_OBJ 64
- void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
- struct otx2_tim_ent *chunk;
- struct otx2_tim_ent *pnext;
- uint8_t objs = 0;
-
-
- chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk);
- chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk +
- tim_ring->nb_chunk_slots)->w0;
- while (chunk) {
- pnext = (struct otx2_tim_ent *)(uintptr_t)
- ((chunk + tim_ring->nb_chunk_slots)->w0);
- if (objs == TIM_MAX_OUTSTANDING_OBJ) {
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
- objs = 0;
- }
- pend_chunks[objs++] = chunk;
- chunk = pnext;
- }
-
- if (objs)
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
-
- return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk;
-}
-
-static struct otx2_tim_ent *
-tim_refill_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (bkt->nb_entry || !bkt->first_chunk) {
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
- (void **)&chunk)))
- return NULL;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) =
- (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- } else {
- chunk = tim_clr_bkt(tim_ring, bkt);
- bkt->first_chunk = (uintptr_t)chunk;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
-
- return chunk;
-}
-
-static struct otx2_tim_ent *
-tim_insert_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
- return NULL;
-
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- return chunk;
-}
-
-static __rte_always_inline int
-tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Get Bucket sema*/
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
- /* Insert the work. */
- rem = tim_bkt_fetch_rem(lock_sema);
-
- if (!rem) {
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- bkt->chunk_remainder = 0;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- }
-
- /* Copy work entry. */
- *chunk = *pent;
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static __rte_always_inline int
-tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
- /* Get Bucket sema*/
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- rem = tim_bkt_fetch_rem(lock_sema);
- if (rem < 0) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- uint64_t w1;
- asm volatile(" ldxr %[w1], [%[crem]] \n"
- " tbz %[w1], 63, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[w1], [%[crem]] \n"
- " tbnz %[w1], 63, rty%= \n"
- "dne%=: \n"
- : [w1] "=&r"(w1)
- : [crem] "r"(&bkt->w1)
- : "memory");
-#else
- while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
- 0)
- ;
-#endif
- goto __retry;
- } else if (!rem) {
- /* Only one thread can be here*/
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_set_rem(bkt, 0);
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- *chunk = *pent;
- if (tim_bkt_fetch_lock(lock_sema)) {
- do {
- lock_sema = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (tim_bkt_fetch_lock(lock_sema) - 1);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- __atomic_store_n(&bkt->chunk_remainder,
- tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- *chunk = *pent;
- }
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static inline uint16_t
-tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt,
- struct otx2_tim_ent *chunk,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent * const ents,
- const struct otx2_tim_bkt * const bkt)
-{
- for (; index < cpy_lmt; index++) {
- *chunk = *(ents + index);
- tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
- tim[index]->impl_opaque[1] = (uintptr_t)bkt;
- tim[index]->state = RTE_EVENT_TIMER_ARMED;
- }
-
- return index;
-}
-
-/* Burst mode functions */
-static inline int
-tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
- const uint16_t rel_bkt,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent *ents,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_bkt *bkt;
- uint16_t chunk_remainder;
- uint16_t index = 0;
- uint64_t lock_sema;
- int16_t rem, crem;
- uint8_t lock_cnt;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Only one thread beyond this. */
- lock_sema = tim_bkt_inc_lock(bkt);
- lock_cnt = (uint8_t)
- ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK);
-
- if (lock_cnt) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " beq dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " bne rty%= \n"
- "dne%=: \n"
- : [lock_cnt] "=&r"(lock_cnt)
- : [lock] "r"(&bkt->lock)
- : "memory");
-#else
- while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
- ;
-#endif
- goto __retry;
- }
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- chunk_remainder = tim_bkt_fetch_rem(lock_sema);
- rem = chunk_remainder - nb_timers;
- if (rem < 0) {
- crem = tim_ring->nb_chunk_slots - chunk_remainder;
- if (chunk_remainder && crem) {
- chunk = ((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) + crem;
-
- index = tim_cpy_wrk(index, chunk_remainder, chunk, tim,
- ents, bkt);
- tim_bkt_sub_rem(bkt, chunk_remainder);
- tim_bkt_add_nent(bkt, chunk_remainder);
- }
-
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim_bkt_dec_lock(bkt);
- rte_errno = ENOMEM;
- tim[index]->state = RTE_EVENT_TIMER_ERROR;
- return crem;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
-
- rem = nb_timers - chunk_remainder;
- tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
- tim_bkt_add_nent(bkt, rem);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
-
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
- tim_bkt_sub_rem(bkt, nb_timers);
- tim_bkt_add_nent(bkt, nb_timers);
- }
-
- tim_bkt_dec_lock(bkt);
-
- return nb_timers;
-}
-
-static int
-tim_rm_entry(struct rte_event_timer *tim)
-{
- struct otx2_tim_ent *entry;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
-
- if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
- return -ENOENT;
-
- entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0];
- if (entry->wqe != tim->ev.u64) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- return -ENOENT;
- }
-
- bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
- lock_sema = tim_bkt_inc_lock(bkt);
- if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
- return -ENOENT;
- }
-
- entry->w0 = 0;
- entry->wqe = 0;
- tim->state = RTE_EVENT_TIMER_CANCELED;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
-
- return 0;
-}
-
-#endif /* __OTX2_TIM_WORKER_H__ */
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
deleted file mode 100644
index 95139d27a3..0000000000
--- a/drivers/event/octeontx2/otx2_worker.c
+++ /dev/null
@@ -1,372 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
-
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag(ws);
- } else {
- otx2_ssogws_swtag_norm(ws, tag, new_tt);
- }
-
- ws->swtag_req = 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev,
- const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched(ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
- /* Group hasn't changed, Use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(ws->tag_op)) == grp)
- otx2_ssogws_fwd_swtag(ws, ev);
- else
- /*
- * Group has been changed for group based work pipelining,
- * Use deschedule/add_work operation to transfer the event to
- * new group/core
- */
- otx2_ssogws_fwd_group(ws, ev, grp);
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, flags, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-uint16_t __rte_hot
-otx2_ssogws_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws *ws = port;
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_forward_event(ws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_forward_event(ws, ev);
-
- return 1;
-}
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void
-ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
- otx2_handle_event_t fn, void *arg)
-{
- uint64_t cq_ds_cnt = 1;
- uint64_t aq_cnt = 1;
- uint64_t ds_cnt = 1;
- struct rte_event ev;
- uint64_t enable;
- uint64_t val;
-
- enable = otx2_read64(base + SSO_LF_GGRP_QCTL);
- if (!enable)
- return;
-
- val = queue_id; /* GGRP ID */
- val |= BIT_ULL(18); /* Grouped */
- val |= BIT_ULL(16); /* WAIT */
-
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- cq_ds_cnt &= 0x3FFF3FFF0000;
-
- while (aq_cnt || cq_ds_cnt || ds_cnt) {
- otx2_write64(val, ws->getwrk_op);
- otx2_ssogws_get_work_empty(ws, &ev, 0);
- if (fn != NULL && ev.u64 != 0)
- fn(arg, ev);
- if (ev.sched_type != SSO_TT_EMPTY)
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- rte_mb();
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- /* Extract cq and ds count */
- cq_ds_cnt &= 0x3FFF3FFF0000;
- }
-
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_GWC_INVAL);
- rte_mb();
-}
-
-void
-ssogws_reset(struct otx2_ssogws *ws)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t pend_state;
- uint8_t pend_tt;
- uint64_t tag;
-
- /* Wait till getwork/swtp/waitw/desched completes. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58)));
-
- tag = otx2_read64(base + SSOW_LF_GWS_TAG);
- pend_tt = (tag >> 32) & 0x3;
- if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
- if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED)
- otx2_ssogws_swtag_untag(ws);
- otx2_ssogws_desched(ws);
- }
- rte_mb();
-
- /* Wait for desched to complete. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & BIT_ULL(58));
-}
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
deleted file mode 100644
index aa766c6602..0000000000
--- a/drivers/event/octeontx2/otx2_worker.h
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_H__
-#define __OTX2_WORKER_H__
-
-#include <rte_common.h>
-#include <rte_branch_prediction.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-#include "otx2_ethdev_sec_tx.h"
-
-/* SSO Operations */
-
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags, const void * const lookup_mem)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- otx2_write64(BIT_ULL(16) | /* wait for work. */
- 1, /* Use Mask set 0. */
- ws->getwrk_op);
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags,
- lookup_mem);
- /* Extracting tstamp, if PTP enabled */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf,
- ws->tstamp, flags,
- (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-/* Used in cleaning up workslot. */
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch_non_temporal((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch_non_temporal((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY &&
- event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags, NULL);
- /* Extracting tstamp, if PTP enabled */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)get_work1)
- + OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt,
- uint16_t grp)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
- otx2_write64(val, ws->swtag_desched_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32);
- otx2_write64(val, ws->swtag_norm_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_untag(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_SWTAG_UNTAG);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
-{
- if (OTX2_SSOW_TT_FROM_TAG(otx2_read64(tag_op)) == SSO_TT_EMPTY)
- return;
- otx2_write64(0, flush_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_desched(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_DESCHED);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t swtp;
-
- asm volatile(" ldr %[swtb], [%[swtp_loc]] \n"
- " tbz %[swtb], 62, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[swtb], [%[swtp_loc]] \n"
- " tbnz %[swtb], 62, rty%= \n"
- "done%=: \n"
- : [swtb] "=&r" (swtp)
- : [swtp_loc] "r" (ws->tag_op));
-#else
- /* Wait for the SWTAG/SWTAG_FULL operation */
- while (otx2_read64(ws->tag_op) & BIT_ULL(62))
- ;
-#endif
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t tag_op)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t tag;
-
- asm volatile (
- " ldr %[tag], [%[tag_op]] \n"
- " tbnz %[tag], 35, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_op]] \n"
- " tbz %[tag], 35, rty%= \n"
- "done%=: \n"
- : [tag] "=&r" (tag)
- : [tag_op] "r" (tag_op)
- );
-#else
- /* Wait for the HEAD to be set */
- while (!(otx2_read64(tag_op) & BIT_ULL(35)))
- ;
-#endif
-}
-
-static __rte_always_inline const struct otx2_eth_txq *
-otx2_ssogws_xtract_meta(struct rte_mbuf *m,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
-{
- return (const struct otx2_eth_txq *)txq_data[m->port][
- rte_event_eth_tx_adapter_txq_get(m)];
-}
-
-static __rte_always_inline void
-otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
- uint64_t *cmd, const uint32_t flags)
-{
- otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
- otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
-}
-
-static __rte_always_inline uint16_t
-otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
- const uint32_t flags)
-{
- struct rte_mbuf *m = ev->mbuf;
- const struct otx2_eth_txq *txq;
- uint16_t ref_cnt = m->refcnt;
-
- if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
- (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- return otx2_sec_event_tx(base, ev, m, txq, flags);
- }
-
- /* Perform header writes before barrier for TSO */
- otx2_nix_xmit_prepare_tso(m, flags);
- /* Let's commit any changes to the packet here when fast free
- * is set, as no further changes will be made to the mbuf.
- * When fast free is not set, both otx2_nix_prepare_mseg()
- * and otx2_nix_xmit_prepare() have a barrier after the refcnt
- * update.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- otx2_ssogws_prepare_pkt(txq, m, cmd, flags);
-
- if (flags & NIX_TX_MULTI_SEG_F) {
- const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, segdw, flags);
- if (!ev->sched_type) {
- otx2_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- } else {
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- }
- } else {
- /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, 4, flags);
-
- if (!ev->sched_type) {
- otx2_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_one(cmd, txq->lmt_addr,
- txq->io_addr, flags);
- } else {
- otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
- flags);
- }
- }
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- if (ref_cnt > 1)
- return 1;
- }
-
- otx2_ssogws_swtag_flush(base + SSOW_LF_GWS_TAG,
- base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
-
- return 1;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
deleted file mode 100644
index 81af4ca904..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker_dual.h"
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
- const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
- const struct rte_event *ev)
-{
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
- } else {
- otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
- }
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
- const struct rte_event *ev, const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
- struct otx2_ssogws_state *vws,
- const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
- /* Group hasn't changed, Use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(vws->tag_op)) == grp) {
- otx2_ssogws_dual_fwd_swtag(vws, ev);
- ws->swtag_req = 1;
- } else {
- /*
- * Group has been changed for group based work pipelining,
- * Use deschedule/add_work operation to transfer the event to
- * new group/core
- */
- otx2_ssogws_dual_fwd_group(vws, ev, grp);
- }
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_dual_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_dual_forward_event(ws, vws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_dual_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_dual_forward_event(ws, vws, ev);
-
- return 1;
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- rte_prefetch_non_temporal(ws); \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags | \
- NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws_dual *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F);\
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
deleted file mode 100644
index 36ae4dd88f..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_DUAL_H__
-#define __OTX2_WORKER_DUAL_H__
-
-#include <rte_branch_prediction.h>
-#include <rte_common.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-
-/* SSO Operations */
-static __rte_always_inline uint16_t
-otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
- struct otx2_ssogws_state *ws_pair,
- struct rte_event *ev, const uint32_t flags,
- const void * const lookup_mem,
- struct otx2_timesync_info * const tstamp)
-{
- const uint64_t set_gw = BIT_ULL(16) | 1;
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- "rty%=: \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: str %[gw], [%[pong]] \n"
- " dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8]\n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op),
- [gw] "r" (set_gw),
- [pong] "r" (ws_pair->getwrk_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
- get_work1 = otx2_read64(ws->wqp_op);
- otx2_write64(set_gw, ws_pair->getwrk_op);
-
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- uint8_t port = event.sub_event_type;
-
- event.sub_event_type = 0;
- otx2_wqe_to_mbuf(get_work1, mbuf, port,
- event.flow_id, flags, lookup_mem);
- /* Extracting tstamp, if PTP enabled. CGX will prepend
- * the timestamp at starting of packet data and it can
- * be derived from the WQE 9th dword, which corresponds to the SG
- * iova.
- * rte_pktmbuf_mtod_offset can be used for this purpose
- * but it brings down the performance as it reads
- * mbuf->buf_addr which is not part of cache in general
- * fast path.
- */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-#endif
diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/event/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index 57be33b862..ea473552dd 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -161,48 +161,20 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id npa_pci_map[] = {
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
index d295263b87..dc88812585 100644
--- a/drivers/mempool/meson.build
+++ b/drivers/mempool/meson.build
@@ -7,7 +7,6 @@ drivers = [
'dpaa',
'dpaa2',
'octeontx',
- 'octeontx2',
'ring',
'stack',
]
diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build
deleted file mode 100644
index a4bea6d364..0000000000
--- a/drivers/mempool/octeontx2/meson.build
+++ /dev/null
@@ -1,18 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_mempool.c',
- 'otx2_mempool_debug.c',
- 'otx2_mempool_irq.c',
- 'otx2_mempool_ops.c',
-)
-
-deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
deleted file mode 100644
index f63dc06ef2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ /dev/null
@@ -1,457 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_io.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mempool.h"
-
-#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_)
-#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
-
-static inline int
-npa_lf_alloc(struct otx2_npa_lf *lf)
-{
- struct otx2_mbox *mbox = lf->mbox;
- struct npa_lf_alloc_req *req;
- struct npa_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox);
- req->aura_sz = lf->aura_sz;
- req->nr_pools = lf->nr_pools;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return NPA_LF_ERR_ALLOC;
-
- lf->stack_pg_ptrs = rsp->stack_pg_ptrs;
- lf->stack_pg_bytes = rsp->stack_pg_bytes;
- lf->qints = rsp->qints;
-
- return 0;
-}
-
-static int
-npa_lf_free(struct otx2_mbox *mbox)
-{
- otx2_mbox_alloc_msg_npa_lf_free(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz,
- uint32_t nr_pools, struct otx2_mbox *mbox)
-{
- uint32_t i, bmp_sz;
- int rc;
-
- /* Sanity checks */
- if (!lf || !base || !mbox || !nr_pools)
- return NPA_LF_ERR_PARAM;
-
- if (base & AURA_ID_MASK)
- return NPA_LF_ERR_BASE_INVALID;
-
- if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX)
- return NPA_LF_ERR_PARAM;
-
- memset(lf, 0x0, sizeof(*lf));
- lf->base = base;
- lf->aura_sz = aura_sz;
- lf->nr_pools = nr_pools;
- lf->mbox = mbox;
-
- rc = npa_lf_alloc(lf);
- if (rc)
- goto exit;
-
- bmp_sz = rte_bitmap_get_memory_footprint(nr_pools);
-
- /* Allocate memory for bitmap */
- lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz,
- RTE_CACHE_LINE_SIZE);
- if (lf->npa_bmp_mem == NULL) {
- rc = -ENOMEM;
- goto lf_free;
- }
-
- /* Initialize pool resource bitmap array */
- lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz);
- if (lf->npa_bmp == NULL) {
- rc = -EINVAL;
- goto bmap_mem_free;
- }
-
- /* Mark all pools available */
- for (i = 0; i < nr_pools; i++)
- rte_bitmap_set(lf->npa_bmp, i);
-
- /* Allocate memory for qint context */
- lf->npa_qint_mem = rte_zmalloc("npa_qint_mem",
- sizeof(struct otx2_npa_qint) * nr_pools, 0);
- if (lf->npa_qint_mem == NULL) {
- rc = -ENOMEM;
- goto bmap_free;
- }
-
- /* Allocate memory for npa_aura_lim */
- lf->aura_lim = rte_zmalloc("npa_aura_lim_mem",
- sizeof(struct npa_aura_lim) * nr_pools, 0);
- if (lf->aura_lim == NULL) {
- rc = -ENOMEM;
- goto qint_free;
- }
-
- /* Init aura start & end limits */
- for (i = 0; i < nr_pools; i++) {
- lf->aura_lim[i].ptr_start = UINT64_MAX;
- lf->aura_lim[i].ptr_end = 0x0ull;
- }
-
- return 0;
-
-qint_free:
- rte_free(lf->npa_qint_mem);
-bmap_free:
- rte_bitmap_free(lf->npa_bmp);
-bmap_mem_free:
- rte_free(lf->npa_bmp_mem);
-lf_free:
- npa_lf_free(lf->mbox);
-exit:
- return rc;
-}
-
-static int
-npa_lf_fini(struct otx2_npa_lf *lf)
-{
- if (!lf)
- return NPA_LF_ERR_PARAM;
-
- rte_free(lf->aura_lim);
- rte_free(lf->npa_qint_mem);
- rte_bitmap_free(lf->npa_bmp);
- rte_free(lf->npa_bmp_mem);
-
- return npa_lf_free(lf->mbox);
-
-}
-
-static inline uint32_t
-otx2_aura_size_to_u32(uint8_t val)
-{
- if (val == NPA_AURA_SZ_0)
- return 128;
- if (val >= NPA_AURA_SZ_MAX)
- return BIT_ULL(20);
-
- return 1 << (val + 6);
-}
-
-static int
-parse_max_pools(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
- if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
- val = 128;
- if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
- val = BIT_ULL(20);
-
- *(uint8_t *)extra_args = rte_log2_u32(val) - 6;
- return 0;
-}
-
-#define OTX2_MAX_POOLS "max_pools"
-
-static uint8_t
-otx2_parse_aura_size(struct rte_devargs *devargs)
-{
- uint8_t aura_sz = NPA_AURA_SZ_128;
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- goto exit;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-exit:
- return aura_sz;
-}
-
-static inline int
-npa_lf_attach(struct otx2_mbox *mbox)
-{
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff)
-{
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- *npa_msixoff = msix_rsp->npa_msixoff;
-
- return rc;
-}
-
-/**
- * @internal
- * Finalize NPA LF.
- */
-int
-otx2_npa_lf_fini(void)
-{
- struct otx2_idev_cfg *idev;
- int rc = 0;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) {
- otx2_npa_unregister_irqs(idev->npa_lf);
- rc |= npa_lf_fini(idev->npa_lf);
- rc |= npa_lf_detach(idev->npa_lf->mbox);
- otx2_npa_set_defaults(idev);
- }
-
- return rc;
-}
-
-/**
- * @internal
- * Initialize NPA LF.
- */
-int
-otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_npa_lf *lf;
- uint16_t npa_msixoff;
- uint32_t nr_pools;
- uint8_t aura_sz;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- /* Is NPA LF initialized by any other driver? */
- if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) {
-
- rc = npa_lf_attach(dev->mbox);
- if (rc)
- goto fail;
-
- rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff);
- if (rc)
- goto npa_detach;
-
- aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
- nr_pools = otx2_aura_size_to_u32(aura_sz);
-
- lf = &dev->npalf;
- rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20),
- aura_sz, nr_pools, dev->mbox);
-
- if (rc)
- goto npa_detach;
-
- lf->pf_func = dev->pf_func;
- lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = pci_dev->intr_handle;
- lf->pci_dev = pci_dev;
-
- idev->npa_pf_func = dev->pf_func;
- idev->npa_lf = lf;
- rte_smp_wmb();
- rc = otx2_npa_register_irqs(lf);
- if (rc)
- goto npa_fini;
-
- rte_mbuf_set_platform_mempool_ops("octeontx2_npa");
- otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x",
- lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff);
- }
-
- return 0;
-
-npa_fini:
- npa_lf_fini(idev->npa_lf);
-npa_detach:
- npa_lf_detach(dev->mbox);
-fail:
- rte_atomic16_dec(&idev->npa_refcnt);
- return rc;
-}
-
-static inline char*
-otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
-{
- snprintf(name, OTX2_NPA_DEV_NAME_LEN,
- OTX2_NPA_DEV_NAME PCI_PRI_FMT,
- pci_dev->addr.domain, pci_dev->addr.bus,
- pci_dev->addr.devid, pci_dev->addr.function);
-
- return name;
-}
-
-static int
-otx2_npa_init(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
- int rc = -ENOMEM;
-
- mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name),
- sizeof(*dev), SOCKET_ID_ANY,
- 0, OTX2_ALIGN);
- if (mz == NULL)
- goto error;
-
- dev = mz->addr;
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc)
- goto malloc_fail;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto dev_uninit;
-
- dev->drv_inited = true;
- return 0;
-
-dev_uninit:
- otx2_npa_lf_fini();
- otx2_dev_fini(pci_dev, dev);
-malloc_fail:
- rte_memzone_free(mz);
-error:
- otx2_err("Failed to initialize npa device rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_npa_fini(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
-
- mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name));
- if (mz == NULL)
- return -EINVAL;
-
- dev = mz->addr;
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("%s: common resource in use by other devices",
- pci_dev->name);
- return -EAGAIN;
- }
-
- otx2_dev_fini(pci_dev, dev);
- rte_memzone_free(mz);
-
- return 0;
-}
-
-static int
-npa_remove(struct rte_pci_device *pci_dev)
-{
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_fini(pci_dev);
-}
-
-static int
-npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- RTE_SET_USED(pci_drv);
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_init(pci_dev);
-}
-
-static const struct rte_pci_id pci_npa_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_npa = {
- .id_table = pci_npa_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = npa_probe,
- .remove = npa_remove,
-};
-
-RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
-RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
-RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
deleted file mode 100644
index 8aa548248d..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MEMPOOL_H__
-#define __OTX2_MEMPOOL_H__
-
-#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
-#include <rte_devargs.h>
-#include <rte_mempool.h>
-
-#include "otx2_common.h"
-#include "otx2_mbox.h"
-
-enum npa_lf_status {
- NPA_LF_ERR_PARAM = -512,
- NPA_LF_ERR_ALLOC = -513,
- NPA_LF_ERR_INVALID_BLOCK_SZ = -514,
- NPA_LF_ERR_AURA_ID_ALLOC = -515,
- NPA_LF_ERR_AURA_POOL_INIT = -516,
- NPA_LF_ERR_AURA_POOL_FINI = -517,
- NPA_LF_ERR_BASE_INVALID = -518,
-};
-
-struct otx2_npa_lf;
-struct otx2_npa_qint {
- struct otx2_npa_lf *lf;
- uint8_t qintx;
-};
-
-struct npa_aura_lim {
- uint64_t ptr_start;
- uint64_t ptr_end;
-};
-
-struct otx2_npa_lf {
- uint16_t qints;
- uintptr_t base;
- uint8_t aura_sz;
- uint16_t pf_func;
- uint32_t nr_pools;
- void *npa_bmp_mem;
- void *npa_qint_mem;
- uint16_t npa_msixoff;
- struct otx2_mbox *mbox;
- uint32_t stack_pg_ptrs;
- uint32_t stack_pg_bytes;
- struct rte_bitmap *npa_bmp;
- struct npa_aura_lim *aura_lim;
- struct rte_pci_device *pci_dev;
- struct rte_intr_handle *intr_handle;
-};
-
-#define AURA_ID_MASK (BIT_ULL(16) - 1)
-
-/*
- * Generate 64bit handle to have optimized alloc and free aura operation.
- * 0 - AURA_ID_MASK for storing the aura_id.
- * AURA_ID_MASK+1 - (2^64 - 1) for storing the lf base address.
- * This scheme is valid when OS can give AURA_ID_MASK
- * aligned address for lf base address.
- */
-static inline uint64_t
-npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
-{
- uint64_t val;
-
- val = aura_id & AURA_ID_MASK;
- return (uint64_t)addr | val;
-}
-
-static inline uint64_t
-npa_lf_aura_handle_to_aura(uint64_t aura_handle)
-{
- return aura_handle & AURA_ID_MASK;
-}
-
-static inline uintptr_t
-npa_lf_aura_handle_to_base(uint64_t aura_handle)
-{
- return (uintptr_t)(aura_handle & ~AURA_ID_MASK);
-}
-
-static inline uint64_t
-npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop)
-{
- uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (drop)
- wdata |= BIT_ULL(63); /* DROP */
-
- return otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_ALLOCX(0)));
-}
-
-static inline void
-npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (fabs)
- reg |= BIT_ULL(63); /* FABS */
-
- otx2_store_pair(iova, reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0);
-}
-
-static inline uint64_t
-npa_lf_aura_op_cnt_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_CNT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
-{
- uint64_t reg = count & (BIT_ULL(36) - 1);
-
- if (sign)
- reg |= BIT_ULL(43); /* CNT_ADD */
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_limit_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_LIMIT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
-{
- uint64_t reg = limit & (BIT_ULL(36) - 1);
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_available(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(
- aura_handle) + NPA_LF_POOL_OP_AVAILABLE));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
- uint64_t end_iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
-
- lim[reg].ptr_start = RTE_MIN(lim[reg].ptr_start, start_iova);
- lim[reg].ptr_end = RTE_MAX(lim[reg].ptr_end, end_iova);
-
- otx2_store_pair(lim[reg].ptr_start, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_START0);
- otx2_store_pair(lim[reg].ptr_end, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_END0);
-}
-
-/* NPA LF */
-__rte_internal
-int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_npa_lf_fini(void);
-
-/* IRQ */
-int otx2_npa_register_irqs(struct otx2_npa_lf *lf);
-void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf);
-
-/* Debug */
-int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf);
-
-#endif /* __OTX2_MEMPOOL_H__ */
diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c
deleted file mode 100644
index 279ea2e25f..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_debug.c
+++ /dev/null
@@ -1,135 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_mempool.h"
-
-#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-
-static inline void
-npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool)
-{
- npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base);
- npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
- pool->ena, pool->nat_align, pool->stack_caching);
- npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d",
- pool->stack_way_mask, pool->buf_offset);
- npa_dump("W1: buf_size \t\t%d", pool->buf_size);
-
- npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d",
- pool->stack_max_pages, pool->stack_pages);
-
- npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc);
-
- npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d",
- pool->stack_offset, pool->shift, pool->avg_level);
- npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d",
- pool->avg_con, pool->fc_ena, pool->fc_stype);
- npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d",
- pool->fc_hyst_bits, pool->fc_up_crossing);
- npa_dump("W4: update_time\t\t%d\n", pool->update_time);
-
- npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr);
-
- npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start);
-
- npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end);
- npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d",
- pool->err_int, pool->err_int_ena);
- npa_dump("W8: thresh_int\t\t%d", pool->thresh_int);
-
- npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d",
- pool->thresh_int_ena, pool->thresh_up);
- npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d",
- pool->thresh_qint_idx, pool->err_qint_idx);
-}
-
-static inline void
-npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura)
-{
- npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr);
-
- npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d",
- aura->ena, aura->pool_caching, aura->pool_way_mask);
- npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d",
- aura->avg_con, aura->pool_drop_ena);
- npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena);
- npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d",
- aura->bp_ena, aura->aura_drop, aura->shift);
- npa_dump("W1: avg_level\t\t%d\n", aura->avg_level);
-
- npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d",
- (uint64_t)aura->count, aura->nix0_bpid);
- npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid);
-
- npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n",
- (uint64_t)aura->limit, aura->bp, aura->fc_ena);
- npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d",
- aura->fc_up_crossing, aura->fc_stype);
-
- npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits);
-
- npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr);
-
- npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d",
- aura->pool_drop, aura->update_time);
- npa_dump("W5: err_int\t\t%d", aura->err_int);
- npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d",
- aura->err_int_ena, aura->thresh_int);
- npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena);
-
- npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d",
- aura->thresh_up, aura->thresh_qint_idx);
- npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx);
-
- npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh);
-}
-
-int
-otx2_mempool_ctx_dump(struct otx2_npa_lf *lf)
-{
- struct npa_aq_enq_req *aq;
- struct npa_aq_enq_rsp *rsp;
- uint32_t q;
- int rc = 0;
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_POOL;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(%d) context", q);
- return rc;
- }
- npa_dump("============== pool=%d ===============\n", q);
- npa_lf_pool_dump(&rsp->pool);
- }
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_AURA;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get aura(%d) context", q);
- return rc;
- }
- npa_dump("============== aura=%d ===============\n", q);
- npa_lf_aura_dump(&rsp->aura);
- }
-
- return rc;
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
deleted file mode 100644
index 5fa22b9612..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ /dev/null
@@ -1,303 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_common.h>
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-
-static void
-npa_lf_err_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_ERR_INT);
-}
-
-static int
-npa_lf_register_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- /* Register err interrupt vector */
- rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec);
-
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec);
-}
-
-static void
-npa_lf_ras_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_RAS);
-}
-
-static int
-npa_lf_register_ras_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf)
-{
- int vec;
- struct rte_intr_handle *handle = lf->intr_handle;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec);
-}
-
-static inline uint8_t
-npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, lf->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p)
-{
- return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a)
-{
- return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
-}
-
-static void
-npa_lf_q_irq(void *param)
-{
- struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param;
- struct otx2_npa_lf *lf = qint->lf;
- uint8_t irq, qintx = qint->qintx;
- uint32_t q, pool, aura;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
-
- /* Handle pool queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- pool = q % lf->qints;
- irq = npa_lf_pool_irq_get_and_clear(lf, pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
- }
-
- /* Handle aura queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
-
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aura = q % lf->qints;
- irq = npa_lf_aura_irq_get_and_clear(lf, aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
- otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
- }
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
- otx2_mempool_ctx_dump(lf);
-}
-
-static int
-npa_lf_register_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs, rc = 0;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- qintmem->lf = lf;
- qintmem->qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec);
- if (rc)
- break;
-
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-static void
-npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec);
-
- qintmem->lf = NULL;
- qintmem->qintx = 0;
- }
-}
-
-int
-otx2_npa_register_irqs(struct otx2_npa_lf *lf)
-{
- int rc;
-
- if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x",
- lf->npa_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = npa_lf_register_err_irq(lf);
- /* Register RAS interrupt */
- rc |= npa_lf_register_ras_irq(lf);
- /* Register queue interrupts */
- rc |= npa_lf_register_queue_irqs(lf);
-
- return rc;
-}
-
-void
-otx2_npa_unregister_irqs(struct otx2_npa_lf *lf)
-{
- npa_lf_unregister_err_irq(lf);
- npa_lf_unregister_ras_irq(lf);
- npa_lf_unregister_queue_irqs(lf);
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
deleted file mode 100644
index 332e4f1cb2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ /dev/null
@@ -1,901 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_mempool.h>
-#include <rte_vect.h>
-
-#include "otx2_mempool.h"
-
-static int __rte_hot
-otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
-{
- unsigned int index;
- const uint64_t aura_handle = mp->pool_id;
- const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_FREE0;
-
- /* Ensure mbuf init changes are written before the free pointers
- * are enqueued to the stack.
- */
- rte_io_wmb();
- for (index = 0; index < n; index++)
- otx2_store_pair((uint64_t)obj_table[index], reg, addr);
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
- void **obj_table, uint8_t i)
-{
- uint8_t retry = 4;
-
- do {
- obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr);
- if (obj_table[i] != NULL)
- return 0;
-
- } while (retry--);
-
- return -ENOENT;
-}
-
-#if defined(RTE_ARCH_ARM64)
-static __rte_noinline int
-npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
- void **obj_table, unsigned int n)
-{
- uint8_t i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL)
- continue;
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
- return -ENOENT;
- }
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
- unsigned int n, void **obj_table)
-{
- register const uint64_t wdata64 __asm("x26") = wdata;
- register const uint64_t wdata128 __asm("x27") = wdata;
- uint64x2_t failed = vdupq_n_u64(~0);
-
- switch (n) {
- case 32:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x16, x17, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x18, x19, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x20, x21, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x22, x23, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- "fmov d16, x16\n"
- "fmov v16.D[1], x17\n"
- "fmov d17, x18\n"
- "fmov v17.D[1], x19\n"
- "fmov d18, x20\n"
- "fmov v18.D[1], x21\n"
- "fmov d19, x22\n"
- "fmov v19.D[1], x23\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x0\n"
- "fmov v20.D[1], x1\n"
- "fmov d21, x2\n"
- "fmov v21.D[1], x3\n"
- "fmov d22, x4\n"
- "fmov v22.D[1], x5\n"
- "fmov d23, x6\n"
- "fmov v23.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16",
- "x17", "x18", "x19", "x20", "x21", "x22", "x23", "v16", "v17",
- "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 16:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "v16",
- "v17", "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 8:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "v16", "v17", "v18", "v19"
- );
- break;
- }
- case 4:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "st1 { v16.2d, v17.2d}, [%[dst]], 32\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "v16", "v17"
- );
- break;
- }
- case 2:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "st1 { v16.2d}, [%[dst]], 16\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "v16"
- );
- break;
- }
- case 1:
- return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
- }
-
- if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1))))
- return npa_lf_aura_op_search_alloc(wdata, addr, (void **)
- ((char *)obj_table - (sizeof(uint64_t) * n)), n);
-
- return 0;
-}
-
-static __rte_noinline void
-otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- unsigned int i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL) {
- otx2_npa_enq(mp, &obj_table[i], 1);
- obj_table[i] = NULL;
- }
- }
-}
-
-static __rte_noinline int __rte_hot
-otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- void **obj_table_bak = obj_table;
- const unsigned int nfree = n;
- unsigned int parts;
-
- int64_t * const addr = (int64_t * const)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- while (n) {
- parts = n > 31 ? 32 : rte_align32prevpow2(n);
- n -= parts;
- if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
- parts, obj_table))) {
- otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
- return -ENOENT;
- }
- obj_table += parts;
- }
-
- return 0;
-}
-
-#else
-
-static inline int __rte_hot
-otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- unsigned int index;
-
- int64_t * const addr = (int64_t *)
-  (npa_lf_aura_handle_to_base(mp->pool_id) +
-   NPA_LF_AURA_OP_ALLOCX(0));
- for (index = 0; index < n; index++, obj_table++) {
-  /* npa_lf_aura_op_alloc_one() fills obj_table[0] and returns
-   * 0 on success, -ENOENT on failure.
-   */
-  if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0)) {
-   for (; index > 0; index--) {
-    obj_table--;
-    otx2_npa_enq(mp, obj_table, 1);
-   }
-   return -ENOENT;
-  }
- }
-
- return 0;
-}
-
-#endif
-
-static unsigned int
-otx2_npa_get_count(const struct rte_mempool *mp)
-{
- return (unsigned int)npa_lf_aura_op_available(mp->pool_id);
-}
-
-static int
-npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
- struct npa_aura_s *aura, struct npa_pool_s *pool)
-{
- struct npa_aq_enq_req *aura_init_req, *pool_init_req;
- struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
- off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff;
- pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_init_rsp->hdr.rc != 0 || pool_init_rsp->hdr.rc != 0)
-  return NPA_LF_ERR_AURA_POOL_INIT;
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return -ENOMEM;
- }
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to lock POOL ctx to NDC");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static int
-npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
- uint32_t aura_id,
- uint64_t aura_handle)
-{
- struct npa_aq_enq_req *aura_req, *pool_req;
- struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct ndc_sync_op *ndc_req;
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -EINVAL;
-
- /* Procedure for disabling an aura/pool */
- rte_delay_us(10);
- npa_lf_aura_op_alloc(aura_handle, 0);
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_WRITE;
- pool_req->pool.ena = 0;
- pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
- aura_req->aura.ena = 0;
- aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
- aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_FINI;
-
- /* Sync NDC-NPA for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->npa_lf_sync = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
- return NPA_LF_ERR_AURA_POOL_FINI;
- }
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock AURA ctx to NDC");
- return -EINVAL;
- }
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock POOL ctx to NDC");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static inline char*
-npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
-{
- snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d",
- lf->pf_func, pool_id);
-
- return name;
-}
-
-static inline const struct rte_memzone *
-npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
- int pool_id, size_t size)
-{
- return rte_memzone_reserve_aligned(
- npa_lf_stack_memzone_name(lf, pool_id, name), size, 0,
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-}
-
-static inline int
-npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
- if (mz == NULL)
- return -EINVAL;
-
- return rte_memzone_free(mz);
-}
-
-static inline int
-bitmap_ctzll(uint64_t slab)
-{
- if (slab == 0)
- return 0;
-
- return __builtin_ctzll(slab);
-}
-
-static int
-npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
- const uint32_t block_count, struct npa_aura_s *aura,
- struct npa_pool_s *pool, uint64_t *aura_handle)
-{
- int rc, aura_id, pool_id, stack_size, alloc_size;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- uint64_t slab;
- uint32_t pos;
-
- /* Sanity check */
- if (!lf || !block_size || !block_count ||
- !pool || !aura || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- /* Block size should be cache line aligned and in range of 128B-128KB */
- if (block_size % OTX2_ALIGN || block_size < 128 ||
- block_size > 128 * 1024)
- return NPA_LF_ERR_INVALID_BLOCK_SZ;
-
- pos = slab = 0;
- /* Scan from the beginning */
- __rte_bitmap_scan_init(lf->npa_bmp);
- /* Scan bitmap to get the free pool */
- rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab);
- /* Empty bitmap */
- if (rc == 0) {
- otx2_err("Mempools exhausted, 'max_pools' devargs to increase");
- return -ERANGE;
- }
-
- /* Get aura_id from resource bitmap */
- aura_id = pos + bitmap_ctzll(slab);
- /* Mark pool as reserved */
- rte_bitmap_clear(lf->npa_bmp, aura_id);
-
- /* Configuration based on each aura has separate pool(aura-pool pair) */
- pool_id = aura_id;
- rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >=
- (int)BIT_ULL(6 + lf->aura_sz)) ? NPA_LF_ERR_AURA_ID_ALLOC : 0;
- if (rc)
- goto exit;
-
- /* Allocate stack memory */
- stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs;
- alloc_size = stack_size * lf->stack_pg_bytes;
-
- mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size);
- if (mz == NULL) {
- rc = -ENOMEM;
- goto aura_res_put;
- }
-
- /* Update aura fields */
- aura->pool_addr = pool_id;/* AF will translate to associated poolctx */
- aura->ena = 1;
- aura->shift = rte_log2_u32(block_count);
- aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
- aura->limit = block_count;
- aura->pool_caching = 1;
- aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
- /* Many to one reduction */
- aura->err_qint_idx = aura_id % lf->qints;
-
- /* Update pool fields */
- pool->stack_base = mz->iova;
- pool->ena = 1;
- pool->buf_size = block_size / OTX2_ALIGN;
- pool->stack_max_pages = stack_size;
- pool->shift = rte_log2_u32(block_count);
- pool->shift = pool->shift < 8 ? 0 : pool->shift - 8;
- pool->ptr_start = 0;
- pool->ptr_end = ~0;
- pool->stack_caching = 1;
- pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
-
- /* Many to one reduction */
- pool->err_qint_idx = pool_id % lf->qints;
-
- /* Issue AURA_INIT and POOL_INIT op */
- rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool);
- if (rc)
- goto stack_mem_free;
-
- *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base);
-
- /* Update aura count */
- npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count);
- /* Read it back to make sure aura count is updated */
- npa_lf_aura_op_cnt_get(*aura_handle);
-
- return 0;
-
-stack_mem_free:
- rte_memzone_free(mz);
-aura_res_put:
- rte_bitmap_set(lf->npa_bmp, aura_id);
-exit:
- return rc;
-}
-
-static int
-npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- int aura_id, pool_id, rc;
-
- if (!lf || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
- rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
- rc |= npa_lf_stack_dma_free(lf, name, pool_id);
-
- rte_bitmap_set(lf->npa_bmp, aura_id);
-
- return rc;
-}
-
-static int
-npa_lf_aura_range_update_check(uint64_t aura_handle)
-{
- uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
- __otx2_io struct npa_pool_s *pool;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-
- req->aura_id = aura_id;
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
- return rc;
- }
-
- pool = &rsp->pool;
-
- if (lim[aura_id].ptr_start != pool->ptr_start ||
- lim[aura_id].ptr_end != pool->ptr_end) {
- otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
- return -ERANGE;
- }
-
- return 0;
-}
-
-static int
-otx2_npa_alloc(struct rte_mempool *mp)
-{
- uint32_t block_size, block_count;
- uint64_t aura_handle = 0;
- struct otx2_npa_lf *lf;
- struct npa_aura_s aura;
- struct npa_pool_s pool;
- size_t padding;
- int rc;
-
- lf = otx2_npa_lf_obj_get();
- if (lf == NULL) {
- rc = -EINVAL;
- goto error;
- }
-
- block_size = mp->elt_size + mp->header_size + mp->trailer_size;
- /*
- * OCTEON TX2 has 8 sets, 41 ways L1D cache, VA<9:7> bits dictate
- * the set selection.
- * Add additional padding to ensure that the element size always
- * occupies odd number of cachelines to ensure even distribution
- * of elements among L1D cache sets.
- */
- padding = ((block_size / RTE_CACHE_LINE_SIZE) % 2) ? 0 :
- RTE_CACHE_LINE_SIZE;
- mp->trailer_size += padding;
- block_size += padding;
-
- block_count = mp->size;
-
- if (block_size % OTX2_ALIGN != 0) {
- otx2_err("Block size should be multiple of 128B");
- rc = -ERANGE;
- goto error;
- }
-
- memset(&aura, 0, sizeof(struct npa_aura_s));
- memset(&pool, 0, sizeof(struct npa_pool_s));
- pool.nat_align = 1;
- pool.buf_offset = 1;
-
- if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) {
- otx2_err("Unsupported mp->header_size=%d", mp->header_size);
- rc = -EINVAL;
- goto error;
- }
-
- /* Use driver specific mp->pool_config to override aura config */
- if (mp->pool_config != NULL)
- memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
-
- rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count,
- &aura, &pool, &aura_handle);
- if (rc) {
- otx2_err("Failed to alloc pool or aura rc=%d", rc);
- goto error;
- }
-
- /* Store aura_handle for future queue operations */
- mp->pool_id = aura_handle;
- otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64,
- lf, block_size, block_count, aura_handle);
-
- /* Just hold the reference of the object */
- otx2_npa_lf_obj_ref();
- return 0;
-error:
- return rc;
-}
-
-static void
-otx2_npa_free(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- int rc = 0;
-
- otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
- if (lf != NULL)
- rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
-
- if (rc)
- otx2_err("Failed to free pool or aura rc=%d", rc);
-
- /* Release the reference of npalf */
- otx2_npa_lf_fini();
-}
-
-static ssize_t
-otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
- uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
-{
- size_t total_elt_sz;
-
- /* Need space for one more obj on each chunk to fulfill
- * alignment requirements.
- */
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
- return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
- total_elt_sz, min_chunk_size,
- align);
-}
-
-static uint8_t
-otx2_npa_l1d_way_set_get(uint64_t iova)
-{
- return (iova >> rte_log2_u32(RTE_CACHE_LINE_SIZE)) & 0x7;
-}
-
-static int
-otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
- rte_iova_t iova, size_t len,
- rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
-{
-#define OTX2_L1D_NB_SETS 8
- uint64_t distribution[OTX2_L1D_NB_SETS];
- rte_iova_t start_iova;
- size_t total_elt_sz;
- uint8_t set;
- size_t off;
- int i;
-
- if (iova == RTE_BAD_IOVA)
- return -EINVAL;
-
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
- /* Align object start address to a multiple of total_elt_sz */
- off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
-
- if (len < off)
- return -EINVAL;
-
-
- vaddr = (char *)vaddr + off;
- iova += off;
- len -= off;
-
- memset(distribution, 0, sizeof(uint64_t) * OTX2_L1D_NB_SETS);
- start_iova = iova;
- while (start_iova < iova + len) {
- set = otx2_npa_l1d_way_set_get(start_iova + mp->header_size);
- distribution[set]++;
- start_iova += total_elt_sz;
- }
-
- otx2_npa_dbg("iova %"PRIx64", aligned iova %"PRIx64"", iova - off,
- iova);
- otx2_npa_dbg("length %"PRIu64", aligned length %"PRIu64"",
- (uint64_t)(len + off), (uint64_t)len);
- otx2_npa_dbg("element size %"PRIu64"", (uint64_t)total_elt_sz);
- otx2_npa_dbg("requested objects %"PRIu64", possible objects %"PRIu64"",
- (uint64_t)max_objs, (uint64_t)(len / total_elt_sz));
- otx2_npa_dbg("L1D set distribution :");
- for (i = 0; i < OTX2_L1D_NB_SETS; i++)
- otx2_npa_dbg("set[%d] : objects : %"PRIu64"", i,
- distribution[i]);
-
- npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
-
- if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
- return -EBUSY;
-
- return rte_mempool_op_populate_helper(mp,
- RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
- max_objs, vaddr, iova, len,
- obj_cb, obj_cb_arg);
-}
-
-static struct rte_mempool_ops otx2_npa_ops = {
- .name = "octeontx2_npa",
- .alloc = otx2_npa_alloc,
- .free = otx2_npa_free,
- .enqueue = otx2_npa_enq,
- .get_count = otx2_npa_get_count,
- .calc_mem_size = otx2_npa_calc_mem_size,
- .populate = otx2_npa_populate,
-#if defined(RTE_ARCH_ARM64)
- .dequeue = otx2_npa_deq_arm64,
-#else
- .dequeue = otx2_npa_deq,
-#endif
-};
-
-RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/octeontx2/version.map b/drivers/mempool/octeontx2/version.map
deleted file mode 100644
index e6887ceb8f..0000000000
--- a/drivers/mempool/octeontx2/version.map
+++ /dev/null
@@ -1,8 +0,0 @@
-INTERNAL {
- global:
-
- otx2_npa_lf_fini;
- otx2_npa_lf_init;
-
- local: *;
-};
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index f8f3d3895e..d34bc6898f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -579,6 +579,21 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_nix_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_AF_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 2355d1cde8..e35652fe63 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -45,7 +45,6 @@ drivers = [
'ngbe',
'null',
'octeontx',
- 'octeontx2',
'octeontx_ep',
'pcap',
'pfe',
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
deleted file mode 100644
index ab15844cbc..0000000000
--- a/drivers/net/octeontx2/meson.build
+++ /dev/null
@@ -1,47 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_rx.c',
- 'otx2_tx.c',
- 'otx2_tm.c',
- 'otx2_rss.c',
- 'otx2_mac.c',
- 'otx2_ptp.c',
- 'otx2_flow.c',
- 'otx2_link.c',
- 'otx2_vlan.c',
- 'otx2_stats.c',
- 'otx2_mcast.c',
- 'otx2_lookup.c',
- 'otx2_ethdev.c',
- 'otx2_flow_ctrl.c',
- 'otx2_flow_dump.c',
- 'otx2_flow_parse.c',
- 'otx2_flow_utils.c',
- 'otx2_ethdev_irq.c',
- 'otx2_ethdev_ops.c',
- 'otx2_ethdev_sec.c',
- 'otx2_ethdev_debug.c',
- 'otx2_ethdev_devargs.c',
-)
-
-deps += ['bus_pci', 'cryptodev', 'eventdev', 'security']
-deps += ['common_octeontx2', 'mempool_octeontx2']
-
-extra_flags = ['-flax-vector-conversions']
-foreach flag: extra_flags
- if cc.has_argument(flag)
- cflags += flag
- endif
-endforeach
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../crypto/octeontx2')
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
deleted file mode 100644
index 4f1c0b98de..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ /dev/null
@@ -1,2814 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <ethdev_pci.h>
-#include <rte_io.h>
-#include <rte_malloc.h>
-#include <rte_mbuf.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_mempool.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-
-static inline uint64_t
-nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_RX_OFFLOAD_CAPA;
-
- if (otx2_dev_is_vf(dev) ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
-
- return capa;
-}
-
-static inline uint64_t
-nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_TX_OFFLOAD_CAPA;
-
- /* TSO not supported for earlier chip revisions */
- if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
- return capa;
-}
-
-static const struct otx2_dev_ops otx2_dev_ops = {
- .link_status_update = otx2_eth_dev_link_status_update,
- .ptp_info_update = otx2_eth_dev_ptp_info_update,
- .link_status_get = otx2_eth_dev_link_status_get,
-};
-
-static int
-nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_alloc_req *req;
- struct nix_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
- req->rq_cnt = nb_rxq;
- req->sq_cnt = nb_txq;
- req->cq_cnt = nb_rxq;
- /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */
- RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
- req->xqe_sz = NIX_XQESZ_W16;
- req->rss_sz = dev->rss_info.rss_size;
- req->rss_grps = NIX_RSS_GRPS;
- req->npa_func = otx2_npa_pf_func_get();
- req->sso_func = otx2_sso_pf_func_get();
- req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
- req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
- req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
- }
- req->rx_cfg |= (BIT_ULL(32 /* DROP_RE */) |
- BIT_ULL(33 /* Outer L2 Length */) |
- BIT_ULL(38 /* Inner L4 UDP Length */) |
- BIT_ULL(39 /* Inner L3 Length */) |
- BIT_ULL(40 /* Outer L4 UDP Length */) |
- BIT_ULL(41 /* Outer L3 Length */));
-
- if (dev->rss_tag_as_xor == 0)
- req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->sqb_size = rsp->sqb_size;
- dev->tx_chan_base = rsp->tx_chan_base;
- dev->rx_chan_base = rsp->rx_chan_base;
- dev->rx_chan_cnt = rsp->rx_chan_cnt;
- dev->tx_chan_cnt = rsp->tx_chan_cnt;
- dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
- dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
- dev->lf_tx_stats = rsp->lf_tx_stats;
- dev->lf_rx_stats = rsp->lf_rx_stats;
- dev->cints = rsp->cints;
- dev->qints = rsp->qints;
- dev->npc_flow.channel = dev->rx_chan_base;
- dev->ptp_en = rsp->hw_rx_tstamp_en;
-
- return 0;
-}
-
-static int
-nix_lf_switch_header_type_enable(struct otx2_eth_dev *dev, bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_set_pkind *req;
- struct msg_resp *rsp;
- int rc;
-
- if (dev->npc_flow.switch_header_type == 0)
- return 0;
-
- /* Notify AF about higig2 config */
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN90B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_CH_LEN_24B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN24B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_EXDSA_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_VLAN_EXDSA_PKIND;
- }
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_RX;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_24B)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_TX;
- return otx2_mbox_process_msg(mbox, (void *)&rsp);
-}
-
-static int
-nix_lf_free(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_free_req *req;
- struct ndc_sync_op *ndc_req;
- int rc;
-
- /* Sync NDC-NIX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- ndc_req->nix_lf_rx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
-
- req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
- /* Let AF driver free all this nix lf's
- * NPC entries allocated using NPC MBOX.
- */
- req->flags = 0;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_enable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_disable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_start_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (en && otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static inline void
-nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
-{
- rxq->head = 0;
- rxq->available = 0;
-}
-
-static inline uint32_t
-nix_qsize_to_val(enum nix_q_size_e qsize)
-{
- return (16UL << (qsize * 2));
-}
-
-static inline enum nix_q_size_e
-nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
-{
- int i;
-
- if (otx2_ethdev_fixup_is_min_4k_q(dev))
- i = nix_q_size_4K;
- else
- i = nix_q_size_16;
-
- for (; i < nix_q_size_max; i++)
- if (val <= nix_qsize_to_val(i))
- break;
-
- if (i >= nix_q_size_max)
- i = nix_q_size_max - 1;
-
- return i;
-}
-
-static int
-nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
- uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
-{
- struct otx2_mbox *mbox = dev->mbox;
- const struct rte_memzone *rz;
- uint32_t ring_size, cq_size;
- struct nix_aq_enq_req *aq;
- uint16_t first_skip;
- int rc;
-
- cq_size = rxq->qlen;
- ring_size = cq_size * NIX_CQ_ENTRY_SZ;
- rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
- NIX_CQ_ALIGN, dev->node);
- if (rz == NULL) {
- otx2_err("Failed to allocate mem for cq hw ring");
- return -ENOMEM;
- }
- memset(rz->addr, 0, rz->len);
- rxq->desc = (uintptr_t)rz->addr;
- rxq->qmask = cq_size - 1;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
- aq->cq.qsize = rxq->qsize;
- aq->cq.base = rz->iova;
- aq->cq.avg_level = 0xff;
- aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
- aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
-
- /* Many to one reduction */
- aq->cq.qint_idx = qid % dev->qints;
- /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
- aq->cq.cint_idx = qid;
-
- if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
- const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID;
- uint16_t min_rx_drop;
-
- min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
- aq->cq.drop = min_rx_drop;
- aq->cq.drop_ena = 1;
- rxq->cq_drop = min_rx_drop;
- } else {
- rxq->cq_drop = NIX_CQ_THRESH_LEVEL;
- aq->cq.drop = rxq->cq_drop;
- aq->cq.drop_ena = 1;
- }
-
- /* TX pause frames enable flowctrl on RX side */
- if (dev->fc_info.tx_pause) {
- /* Single bpid is allocated for all rx channels for now */
- aq->cq.bpid = dev->fc_info.bpid[0];
- aq->cq.bp = rxq->cq_drop;
- aq->cq.bp_ena = 1;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->rq.sso_ena = 0;
-
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- aq->rq.ipsech_ena = 1;
-
- aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
- aq->rq.spb_ena = 0;
- aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- first_skip = (sizeof(struct rte_mbuf));
- first_skip += RTE_PKTMBUF_HEADROOM;
- first_skip += rte_pktmbuf_priv_size(mp);
- rxq->data_off = first_skip;
-
- first_skip /= 8; /* Expressed in number of dwords */
- aq->rq.first_skip = first_skip;
- aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
- aq->rq.flow_tagw = 32; /* 32-bits */
- aq->rq.lpb_sizem1 = mp->elt_size / 8;
- aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
- aq->rq.rq_int_ena = 0;
- /* Many to one reduction */
- aq->rq.qint_idx = qid % dev->qints;
-
- aq->rq.xqe_drop_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init rq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to LOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to LOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static int
-nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
- struct otx2_eth_rxq *rxq, const bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
-
- /* Pkts will be dropped silently if RQ is disabled */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.ena = enb;
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- /* RQ is already disabled */
- /* Disable CQ */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to UNLOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static inline int
-nix_get_data_off(struct otx2_eth_dev *dev)
-{
- return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
-}
-
-uint64_t
-otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
-{
- struct rte_mbuf mb_def;
- uint64_t *tmp;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
- offsetof(struct rte_mbuf, data_off) != 2);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
- offsetof(struct rte_mbuf, data_off) != 4);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
- offsetof(struct rte_mbuf, data_off) != 6);
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
- mb_def.port = port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* Prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- tmp = (uint64_t *)&mb_def.rearm_data;
-
- return *tmp;
-}
-
-static void
-otx2_nix_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- struct otx2_eth_rxq *rxq = dev->data->rx_queues[qid];
-
- if (!rxq)
- return;
-
- otx2_nix_dbg("Releasing rxq %u", rxq->rq);
- nix_cq_rq_uninit(rxq->eth_dev, rxq);
- rte_free(rxq);
- dev->data->rx_queues[qid] = NULL;
-}
-
-static int
-otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
- uint16_t nb_desc, unsigned int socket,
- const struct rte_eth_rxconf *rx_conf,
- struct rte_mempool *mp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mempool_ops *ops;
- struct otx2_eth_rxq *rxq;
- const char *platform_ops;
- enum nix_q_size_e qsize;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile time check to make sure all fast path elements in a CL */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
-
- /* Sanity checks */
- if (rx_conf->rx_deferred_start == 1) {
- otx2_err("Deferred Rx start is not supported");
- goto fail;
- }
-
- platform_ops = rte_mbuf_platform_mempool_ops();
- /* This driver needs octeontx2_npa mempool ops to work */
- ops = rte_mempool_get_ops(mp->ops_index);
- if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
- otx2_err("mempool ops should be of octeontx2_npa type");
- goto fail;
- }
-
- if (mp->pool_id == 0) {
- otx2_err("Invalid pool_id");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed */
- if (eth_dev->data->rx_queues[rq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
- otx2_nix_rx_queue_release(eth_dev, rq);
- rte_eth_dma_zone_free(eth_dev, "cq", rq);
- }
-
- offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
- dev->rx_offloads |= offloads;
-
- /* Find the CQ queue size */
- qsize = nix_qsize_clampup_get(dev, nb_desc);
- /* Allocate rxq memory */
- rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
- if (rxq == NULL) {
- otx2_err("Failed to allocate rq=%d", rq);
- rc = -ENOMEM;
- goto fail;
- }
-
- rxq->eth_dev = eth_dev;
- rxq->rq = rq;
- rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
- rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
- rxq->wdata = (uint64_t)rq << 32;
- rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
- eth_dev->data->port_id);
- rxq->offloads = offloads;
- rxq->pool = mp;
- rxq->qlen = nix_qsize_to_val(qsize);
- rxq->qsize = qsize;
- rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
- rxq->tstamp = &dev->tstamp;
-
- eth_dev->data->rx_queues[rq] = rxq;
-
- /* Alloc completion queue */
- rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
- if (rc) {
- otx2_err("Failed to allocate rxq=%u", rq);
- goto free_rxq;
- }
-
- rxq->qconf.socket_id = socket;
- rxq->qconf.nb_desc = nb_desc;
- rxq->qconf.mempool = mp;
- memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
-
- nix_rx_queue_reset(rxq);
- otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
- rq, mp->name, qsize, nb_desc, rxq->qlen);
-
- eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
-
- /* Calculate the delta and freq mult between the PTP HI clock and TSC.
- * These are needed to derive the raw clock value from the TSC counter.
- * The read_clock eth op returns the raw clock value.
- */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc) {
- otx2_err("Failed to calculate delta and freq mult");
- goto fail;
- }
- }
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- return 0;
-
-free_rxq:
- otx2_nix_rx_queue_release(eth_dev, rq);
-fail:
- return rc;
-}
-
-static inline uint8_t
-nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
-{
- /*
- * A maximum of three segments can be supported with W8; choose
- * NIX_MAXSQESZ_W16 for multi-segment offload.
- */
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- return NIX_MAXSQESZ_W16;
- else
- return NIX_MAXSQESZ_W8;
-}
-
-static uint16_t
-nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- uint16_t flags = 0;
-
- if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
- (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
- flags |= NIX_RX_OFFLOAD_RSS_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- flags |= NIX_RX_MULTI_SEG_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
- flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- flags |= NIX_RX_OFFLOAD_SECURITY_F;
-
- if (!dev->ptype_disable)
- flags |= NIX_RX_OFFLOAD_PTYPE_F;
-
- return flags;
-}
-
-static uint16_t
-nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t conf = dev->tx_offloads;
- uint16_t flags = 0;
-
- /* Fastpath is dependent on these enums */
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
- RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_mbuf, buf_iova) + 8);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_mbuf, buf_iova) + 16);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_mbuf, ol_flags) + 12);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
- offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
-
- if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
- conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
- flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
- flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
- flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
-
- if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
- flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- flags |= NIX_TX_MULTI_SEG_F;
-
- /* Enable Inner checksum for TSO */
- if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- /* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
- flags |= NIX_TX_OFFLOAD_SECURITY_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- return flags;
-}
-
-static int
-nix_sqb_lock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
-
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to lock POOL in NDC");
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_sqb_unlock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to UNLOCK POOL context");
- return -ENOMEM;
- }
- }
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to UNLOCK POOL in NDC");
- return rc;
- }
-
- return 0;
-}
-
-void
-otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
-{
- struct rte_pktmbuf_pool_private *mbp_priv;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_dev *dev;
- uint32_t buffsz;
-
- eth_dev = rxq->eth_dev;
- dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Get rx buffer size */
- mbp_priv = rte_mempool_get_priv(rxq->pool);
- buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
-
- /* Update the rx[tx]_offload_flags to reflect the change
- * in rx[tx]_offloads.
- */
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- }
-}
-
-static int
-nix_sq_init(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *sq;
- uint32_t rr_quantum;
- uint16_t smq;
- int rc;
-
- if (txq->sqb_pool->pool_id == 0)
- return -EINVAL;
-
- rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
- if (rc) {
- otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
- return rc;
- }
-
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_INIT;
- sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
-
- sq->sq.smq = smq;
- sq->sq.smq_rr_quantum = rr_quantum;
- sq->sq.default_chan = dev->tx_chan_base;
- sq->sq.sqe_stype = NIX_STYPE_STF;
- sq->sq.ena = 1;
- if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
- sq->sq.sqe_stype = NIX_STYPE_STP;
- sq->sq.sqb_aura =
- npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
- sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
-
- /* Many to one reduction */
- sq->sq.qint_idx = txq->sq % dev->qints;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- if (dev->lock_tx_ctx) {
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static int
-nix_sq_uninit(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct ndc_sync_op *ndc_req;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint16_t sqes_per_sqb;
- void *sqb_buf;
- int rc, count;
-
- otx2_nix_dbg("Cleaning up sq %u", txq->sq);
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Check if sq is already cleaned up */
- if (!rsp->sq.ena)
- return 0;
-
- /* Disable sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->sq_mask.ena = ~aq->sq_mask.ena;
- aq->sq.ena = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (dev->lock_tx_ctx) {
- /* Unlock sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- nix_sqb_unlock(txq->sqb_pool);
- }
-
- /* Read SQ and free sqb's */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (aq->sq.smq_pend)
- otx2_err("SQ has pending SQEs");
-
- count = aq->sq.sqb_count;
- sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
- /* Free SQB's that are used */
- sqb_buf = (void *)rsp->sq.head_sqb;
- while (count) {
- void *next_sqb;
-
- next_sqb = *(void **)((uintptr_t)sqb_buf + (uint32_t)
- ((sqes_per_sqb - 1) *
- nix_sq_max_sqe_sz(txq)));
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- (uint64_t)sqb_buf);
- sqb_buf = next_sqb;
- count--;
- }
-
- /* Free next to use sqb */
- if (rsp->sq.next_sqb)
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- rsp->sq.next_sqb);
-
- /* Sync NDC-NIX-TX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
-
- return rc;
-}
-
-static int
-nix_sqb_aura_limit_cfg(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
-{
- struct otx2_eth_dev *dev = txq->dev;
- uint16_t sqes_per_sqb, nb_sqb_bufs;
- char name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool_objsz sz;
- struct npa_aura_s *aura;
- uint32_t tmp, blk_sz;
-
- aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
- snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
- blk_sz = dev->sqb_size;
-
- if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
- sqes_per_sqb = (dev->sqb_size / 8) / 16;
- else
- sqes_per_sqb = (dev->sqb_size / 8) / 8;
-
- nb_sqb_bufs = nb_desc / sqes_per_sqb;
- /* Clamp up to devarg passed SQB count */
- nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_DEF_SQB,
- nb_sqb_bufs + NIX_SQB_LIST_SPACE));
-
- txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
- 0, 0, dev->node,
- RTE_MEMPOOL_F_NO_SPREAD);
- txq->nb_sqb_bufs = nb_sqb_bufs;
- txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
- txq->nb_sqb_bufs_adj = nb_sqb_bufs -
- RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
- txq->nb_sqb_bufs_adj =
- (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
-
- if (txq->sqb_pool == NULL) {
- otx2_err("Failed to allocate sqe mempool");
- goto fail;
- }
-
- memset(aura, 0, sizeof(*aura));
- aura->fc_ena = 1;
- aura->fc_addr = txq->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
- if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
- otx2_err("Failed to set ops for sqe mempool");
- goto fail;
- }
- if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
- otx2_err("Failed to populate sqe mempool");
- goto fail;
- }
-
- tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
- if (dev->sqb_size != sz.elt_size) {
- otx2_err("sqe pool block size is not expected %d != %d",
- dev->sqb_size, sz.elt_size);
- goto fail;
- }
-
- nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
- if (dev->lock_tx_ctx)
- nix_sqb_lock(txq->sqb_pool);
-
- return 0;
-fail:
- return -ENOMEM;
-}
-
-void
-otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- struct nix_send_mem_s *send_mem;
- union nix_send_sg_s *sg;
-
- /* Initialize the fields based on basic single segment packet */
- memset(&txq->cmd, 0, sizeof(txq->cmd));
-
- if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
- send_hdr->w0.sizem1 = 2;
-
- send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
- send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
- if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- /* Default: one seg packet would have:
- * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
- * => 8/2 - 1 = 3
- */
- send_hdr->w0.sizem1 = 3;
- send_hdr_ext->w0.tstmp = 1;
-
- /* To calculate the offset for send_mem,
- * send_hdr->w0.sizem1 * 2
- */
- send_mem = (struct nix_send_mem_s *)(txq->cmd +
- (send_hdr->w0.sizem1 << 1));
- send_mem->subdc = NIX_SUBDC_MEM;
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
- send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
- }
- sg = (union nix_send_sg_s *)&txq->cmd[4];
- } else {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
- send_hdr->w0.sizem1 = 1;
- sg = (union nix_send_sg_s *)&txq->cmd[2];
- }
-
- send_hdr->w0.sq = txq->sq;
- sg->subdc = NIX_SUBDC_SG;
- sg->segs = 1;
- sg->ld_type = NIX_SENDLDTYPE_LDD;
-
- rte_smp_wmb();
-}
-
-static void
-otx2_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
-{
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[qid];
-
- if (!txq)
- return;
-
- otx2_nix_dbg("Releasing txq %u", txq->sq);
-
- /* Flush and disable tm */
- otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
-
- /* Free sqb's and disable sq */
- nix_sq_uninit(txq);
-
- if (txq->sqb_pool) {
- rte_mempool_free(txq->sqb_pool);
- txq->sqb_pool = NULL;
- }
- otx2_nix_sq_flush_post(txq);
- rte_free(txq);
- eth_dev->data->tx_queues[qid] = NULL;
-}
-
-
-static int
-otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
- uint16_t nb_desc, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct rte_memzone *fc;
- struct otx2_eth_txq *txq;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile-time check to make sure all fast path elements fit in a CL */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
-
- if (tx_conf->tx_deferred_start) {
- otx2_err("Tx deferred start is not supported");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed. */
- if (eth_dev->data->tx_queues[sq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
- otx2_nix_tx_queue_release(eth_dev, sq);
- }
-
- /* Find the expected offloads for this queue */
- offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
-
- /* Allocating tx queue data structure */
- txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
- OTX2_ALIGN, socket_id);
- if (txq == NULL) {
- otx2_err("Failed to alloc txq=%d", sq);
- rc = -ENOMEM;
- goto fail;
- }
- txq->sq = sq;
- txq->dev = dev;
- txq->sqb_pool = NULL;
- txq->offloads = offloads;
- dev->tx_offloads |= offloads;
- eth_dev->data->tx_queues[sq] = txq;
-
- /*
- * Allocate memory for flow control updates from HW.
- * Alloc one cache line so that it fits all FC_STYPE modes.
- */
- fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
- OTX2_ALIGN + sizeof(struct npa_aura_s),
- OTX2_ALIGN, dev->node);
- if (fc == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- rc = -ENOMEM;
- goto free_txq;
- }
- txq->fc_iova = fc->iova;
- txq->fc_mem = fc->addr;
-
- /* Initialize the aura sqb pool */
- rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
- if (rc) {
- otx2_err("Failed to alloc sqe pool rc=%d", rc);
- goto free_txq;
- }
-
- /* Initialize the SQ */
- rc = nix_sq_init(txq);
- if (rc) {
- otx2_err("Failed to init sq=%d context", sq);
- goto free_txq;
- }
-
- txq->fc_cache_pkts = 0;
- txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
- /* Evenly distribute LMT slot for each sq */
- txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
-
- txq->qconf.socket_id = socket_id;
- txq->qconf.nb_desc = nb_desc;
- memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
-
- txq->lso_tun_fmt = dev->lso_tun_fmt;
- otx2_nix_form_default_desc(txq);
-
- otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
- " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
- fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
- txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
- eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
-
-free_txq:
- otx2_nix_tx_queue_release(eth_dev, sq);
-fail:
- return rc;
-}
-
-static int
-nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = NULL;
- struct otx2_eth_qconf *rx_qconf = NULL;
- struct otx2_eth_txq **txq;
- struct otx2_eth_rxq **rxq;
- int i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
- if (tx_qconf == NULL) {
- otx2_err("Failed to allocate memory for tx_qconf");
- goto fail;
- }
-
- rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
- if (rx_qconf == NULL) {
- otx2_err("Failed to allocate memory for rx_qconf");
- goto fail;
- }
-
- txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
- for (i = 0; i < nb_txq; i++) {
- if (txq[i] == NULL) {
- tx_qconf[i].valid = false;
- otx2_info("txq[%d] is already released", i);
- continue;
- }
- memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
- tx_qconf[i].valid = true;
- otx2_nix_tx_queue_release(eth_dev, i);
- }
-
- rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
- for (i = 0; i < nb_rxq; i++) {
- if (rxq[i] == NULL) {
- rx_qconf[i].valid = false;
- otx2_info("rxq[%d] is already released", i);
- continue;
- }
- memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
- rx_qconf[i].valid = true;
- otx2_nix_rx_queue_release(eth_dev, i);
- }
-
- dev->tx_qconf = tx_qconf;
- dev->rx_qconf = rx_qconf;
- return 0;
-
-fail:
- free(tx_qconf);
- free(rx_qconf);
-
- return -ENOMEM;
-}
-
-static int
-nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
- struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
- int rc, i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- rc = -ENOMEM;
- /* Setup tx & rx queues with the previous configuration so
- * that the queues can be functional in cases where ports
- * are started without reconfiguring queues.
- *
- * Usual re config sequence is like below:
- * port_configure() {
- * if(reconfigure) {
- * queue_release()
- * queue_setup()
- * }
- * queue_configure() {
- * queue_release()
- * queue_setup()
- * }
- * }
- * port_start()
- *
- * In some applications' control paths, queue_configure() would
- * NOT be invoked for TXQs/RXQs in port_configure().
- * In such cases, the queues can be functional after start as
- * they are already set up in port_configure().
- */
- for (i = 0; i < nb_txq; i++) {
- if (!tx_qconf[i].valid)
- continue;
- rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
- tx_qconf[i].socket_id,
- &tx_qconf[i].conf.tx);
- if (rc) {
- otx2_err("Failed to setup tx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_tx_queue_release(eth_dev, i);
- goto fail;
- }
- }
-
- free(tx_qconf); tx_qconf = NULL;
-
- for (i = 0; i < nb_rxq; i++) {
- if (!rx_qconf[i].valid)
- continue;
- rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
- rx_qconf[i].socket_id,
- &rx_qconf[i].conf.rx,
- rx_qconf[i].mempool);
- if (rc) {
- otx2_err("Failed to setup rx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_rx_queue_release(eth_dev, i);
- goto release_tx_queues;
- }
- }
-
- free(rx_qconf); rx_qconf = NULL;
-
- return 0;
-
-release_tx_queues:
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
-fail:
- if (tx_qconf)
- free(tx_qconf);
- if (rx_qconf)
- free(rx_qconf);
-
- return rc;
-}
-
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
-static void
-nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
-{
- /* These dummy functions are required to support
- * applications which reconfigure queues without
- * stopping the tx burst and rx burst threads (e.g. the KNI app).
- * When the queue context is saved, the txqs/rxqs are released,
- * which would crash the app since rx/tx burst is still
- * running on different lcores.
- */
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
- rte_mb();
-}
-
-static void
-nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4)
-{
- volatile struct nix_lso_format *field;
-
- /* Format works only with TCP packets marked by OL3/OL4 */
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
- /* TCP flags field */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Outer UDP length */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static int
-nix_setup_lso_formats(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lso_format_cfg_rsp *rsp;
- struct nix_lso_format_cfg *req;
- uint8_t *fmt;
- int rc;
-
- /* Skip if TSO was not requested */
- if (!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))
- return 0;
- /*
- * IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4)
- return -EFAULT;
- otx2_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
-
-
- /*
- * IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6)
- return -EFAULT;
- otx2_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /* Save all tun formats into u64 for fast path.
- * Lower 32bit has non-udp tunnel formats.
- * Upper 32bit has udp tunnel formats.
- */
- fmt = dev->lso_tun_idx;
- dev->lso_tun_fmt = ((uint64_t)fmt[NIX_LSO_TUN_V4V4] |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 8 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 16 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 24);
-
- fmt = dev->lso_udp_tun_idx;
- dev->lso_tun_fmt |= ((uint64_t)fmt[NIX_LSO_TUN_V4V4] << 32 |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 40 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 48 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 56);
-
- return 0;
-}
-
-static int
-otx2_nix_configure(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- struct rte_eth_txmode *txmode = &conf->txmode;
- char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
- struct rte_ether_addr *ea;
- uint8_t nb_rxq, nb_txq;
- int rc;
-
- rc = -EINVAL;
-
- /* Sanity checks */
- if (rte_eal_has_hugepages() == 0) {
- otx2_err("Huge page is not configured");
- goto fail_configure;
- }
-
- if (conf->dcb_capability_en == 1) {
- otx2_err("dcb enable is not supported");
- goto fail_configure;
- }
-
- if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
- otx2_err("Flow director is not supported");
- goto fail_configure;
- }
-
- if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
- rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
- otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
- goto fail_configure;
- }
-
- if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
- otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
- goto fail_configure;
- }
-
- if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
- otx2_err("Outer IP and SCTP checksum unsupported");
- goto fail_configure;
- }
-
- /* Free the resources allocated from the previous configure */
- if (dev->configured == 1) {
- otx2_eth_sec_fini(eth_dev);
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
- otx2_nix_vlan_fini(eth_dev);
- otx2_nix_mc_addr_list_uninstall(eth_dev);
- otx2_flow_free_all_resources(dev);
- oxt2_nix_unregister_queue_irqs(eth_dev);
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
- nix_set_nop_rxtx_function(eth_dev);
- rc = nix_store_queue_cfg_and_then_release(eth_dev);
- if (rc)
- goto fail_configure;
- otx2_nix_tm_fini(eth_dev);
- nix_lf_free(dev);
- }
-
- dev->rx_offloads = rxmode->offloads;
- dev->tx_offloads = txmode->offloads;
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- dev->rss_info.rss_grps = NIX_RSS_GRPS;
-
- nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
- nb_txq = RTE_MAX(data->nb_tx_queues, 1);
-
- /* Alloc a nix lf */
- rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
- if (rc) {
- otx2_err("Failed to init nix_lf rc=%d", rc);
- goto fail_offloads;
- }
-
- otx2_nix_err_intr_enb_dis(eth_dev, true);
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- if (dev->ptp_en &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- goto free_nix_lf;
- }
-
- rc = nix_lf_switch_header_type_enable(dev, true);
- if (rc) {
- otx2_err("Failed to enable switch type nix_lf rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = nix_setup_lso_formats(dev);
- if (rc) {
- otx2_err("failed to setup nix lso format fields, rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Configure RSS */
- rc = otx2_nix_rss_config(eth_dev);
- if (rc) {
- otx2_err("Failed to configure rss rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Init the default TM scheduler hierarchy */
- rc = otx2_nix_tm_init_default(eth_dev);
- if (rc) {
- otx2_err("Failed to init traffic manager rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = otx2_nix_vlan_offload_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init vlan offload rc=%d", rc);
- goto tm_fini;
- }
-
- /* Register queue IRQs */
- rc = oxt2_nix_register_queue_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register queue interrupts rc=%d", rc);
- goto vlan_fini;
- }
-
- /* Register cq IRQs */
- if (eth_dev->data->dev_conf.intr_conf.rxq) {
- if (eth_dev->data->nb_rx_queues > dev->cints) {
- otx2_err("Rx interrupt cannot be enabled, rxq > %d",
- dev->cints);
- goto q_irq_fini;
- }
- /* Rx interrupt feature cannot work with vector mode because,
- * vector mode doesn't process packets unless min 4 pkts are
- * received, while cq interrupts are generated even for 1 pkt
- * in the CQ.
- */
- dev->scalar_ena = true;
-
- rc = oxt2_nix_register_cq_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register CQ interrupts rc=%d", rc);
- goto q_irq_fini;
- }
- }
-
- /* Configure loop back mode */
- rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
- if (rc) {
- otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
- if (rc) {
- otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
- goto cq_fini;
- }
-
- /* Enable security */
- rc = otx2_eth_sec_init(eth_dev);
- if (rc)
- goto cq_fini;
-
- rc = otx2_nix_flow_ctrl_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init flow ctrl mode %d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_mc_addr_list_install(eth_dev);
- if (rc < 0) {
- otx2_err("Failed to install mc address list rc=%d", rc);
- goto sec_fini;
- }
-
- /*
- * Restore queue config when reconfigure followed by
- * reconfigure and no queue configure invoked from application case.
- */
- if (dev->configured == 1) {
- rc = nix_restore_queue_cfg(eth_dev);
- if (rc)
- goto uninstall_mc_list;
- }
-
- /* Update the mac address */
- ea = eth_dev->data->mac_addrs;
- memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
- if (rte_is_zero_ether_addr(ea))
- rte_eth_random_addr((uint8_t *)ea);
-
- rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
-
- /* Apply new link configurations if changed */
- rc = otx2_apply_link_speed(eth_dev);
- if (rc) {
- otx2_err("Failed to set link configuration");
- goto uninstall_mc_list;
- }
-
- otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
- " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
- " rx_flags=0x%x tx_flags=0x%x",
- eth_dev->data->port_id, ea_fmt, nb_rxq,
- nb_txq, dev->rx_offloads, dev->tx_offloads,
- dev->rx_offload_flags, dev->tx_offload_flags);
-
- /* All good */
- dev->configured = 1;
- dev->configured_nb_rx_qs = data->nb_rx_queues;
- dev->configured_nb_tx_qs = data->nb_tx_queues;
- return 0;
-
-uninstall_mc_list:
- otx2_nix_mc_addr_list_uninstall(eth_dev);
-sec_fini:
- otx2_eth_sec_fini(eth_dev);
-cq_fini:
- oxt2_nix_unregister_cq_irqs(eth_dev);
-q_irq_fini:
- oxt2_nix_unregister_queue_irqs(eth_dev);
-vlan_fini:
- otx2_nix_vlan_fini(eth_dev);
-tm_fini:
- otx2_nix_tm_fini(eth_dev);
-free_nix_lf:
- nix_lf_free(dev);
-fail_offloads:
- dev->rx_offload_flags &= ~nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags &= ~nix_tx_offload_flags(eth_dev);
-fail_configure:
- dev->configured = 0;
- return rc;
-}
-
-int
-otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc = -EINVAL;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-int
-otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- txq->fc_cache_pkts = 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
- if (rc) {
- otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
- if (rc) {
- otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mbuf *rx_pkts[32];
- struct otx2_eth_rxq *rxq;
- struct rte_eth_link link;
- int count, i, j, rc;
-
- nix_lf_switch_header_type_enable(dev, false);
- nix_cgx_stop_link_event(dev);
- npc_rx_disable(dev);
-
- /* Stop rx queues and free up pkts pending */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_stop(eth_dev, i);
- if (rc)
- continue;
-
- rxq = eth_dev->data->rx_queues[i];
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- while (count) {
- for (j = 0; j < count; j++)
- rte_pktmbuf_free(rx_pkts[j]);
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- }
- }
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- /* Bring down link status internally */
- memset(&link, 0, sizeof(link));
- rte_eth_linkstatus_set(eth_dev, &link);
-
- return 0;
-}
-
-static int
-otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- /* MTU recalculate should be avoided here if PTP is enabled by PF, as
- * otx2_nix_recalc_mtu would be invoked during otx2_nix_ptp_enable_vf
- * call below.
- */
- if (eth_dev->data->nb_rx_queues != 0 && !otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- return rc;
- }
-
- /* Start rx queues */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = otx2_nix_tx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
- if (rc) {
- otx2_err("Failed to update flow ctrl mode %d", rc);
- return rc;
- }
-
- /* Enable PTP if it was requested by the app or if it is already
- * enabled in PF owning this VF
- */
- memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_enable(eth_dev);
- else
- otx2_nix_timesync_disable(eth_dev);
-
- /* Update VF about data off shifted by 8 bytes if PTP already
- * enabled in PF owning this VF
- */
- if (otx2_ethdev_is_ptp_en(dev) && otx2_dev_is_vf(dev))
- otx2_nix_ptp_enable_vf(eth_dev);
-
- if (dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F) {
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
- }
-
- rc = npc_rx_enable(dev);
- if (rc) {
- otx2_err("Failed to enable NPC rx %d", rc);
- return rc;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, true);
-
- rc = nix_cgx_start_link_event(dev);
- if (rc) {
- otx2_err("Failed to start cgx link event %d", rc);
- goto rx_disable;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, false);
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-
-rx_disable:
- npc_rx_disable(dev);
- otx2_nix_toggle_flag_link_cfg(dev, false);
- return rc;
-}
-
-static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
-static int otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
-
-/* Initialize and register driver with DPDK Application */
-static const struct eth_dev_ops otx2_eth_dev_ops = {
- .dev_infos_get = otx2_nix_info_get,
- .dev_configure = otx2_nix_configure,
- .link_update = otx2_nix_link_update,
- .tx_queue_setup = otx2_nix_tx_queue_setup,
- .tx_queue_release = otx2_nix_tx_queue_release,
- .tm_ops_get = otx2_nix_tm_ops_get,
- .rx_queue_setup = otx2_nix_rx_queue_setup,
- .rx_queue_release = otx2_nix_rx_queue_release,
- .dev_start = otx2_nix_dev_start,
- .dev_stop = otx2_nix_dev_stop,
- .dev_close = otx2_nix_dev_close,
- .tx_queue_start = otx2_nix_tx_queue_start,
- .tx_queue_stop = otx2_nix_tx_queue_stop,
- .rx_queue_start = otx2_nix_rx_queue_start,
- .rx_queue_stop = otx2_nix_rx_queue_stop,
- .dev_set_link_up = otx2_nix_dev_set_link_up,
- .dev_set_link_down = otx2_nix_dev_set_link_down,
- .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
- .dev_ptypes_set = otx2_nix_ptypes_set,
- .dev_reset = otx2_nix_dev_reset,
- .stats_get = otx2_nix_dev_stats_get,
- .stats_reset = otx2_nix_dev_stats_reset,
- .get_reg = otx2_nix_dev_get_reg,
- .mtu_set = otx2_nix_mtu_set,
- .mac_addr_add = otx2_nix_mac_addr_add,
- .mac_addr_remove = otx2_nix_mac_addr_del,
- .mac_addr_set = otx2_nix_mac_addr_set,
- .set_mc_addr_list = otx2_nix_set_mc_addr_list,
- .promiscuous_enable = otx2_nix_promisc_enable,
- .promiscuous_disable = otx2_nix_promisc_disable,
- .allmulticast_enable = otx2_nix_allmulticast_enable,
- .allmulticast_disable = otx2_nix_allmulticast_disable,
- .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
- .reta_update = otx2_nix_dev_reta_update,
- .reta_query = otx2_nix_dev_reta_query,
- .rss_hash_update = otx2_nix_rss_hash_update,
- .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
- .xstats_get = otx2_nix_xstats_get,
- .xstats_get_names = otx2_nix_xstats_get_names,
- .xstats_reset = otx2_nix_xstats_reset,
- .xstats_get_by_id = otx2_nix_xstats_get_by_id,
- .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
- .rxq_info_get = otx2_nix_rxq_info_get,
- .txq_info_get = otx2_nix_txq_info_get,
- .rx_burst_mode_get = otx2_rx_burst_mode_get,
- .tx_burst_mode_get = otx2_tx_burst_mode_get,
- .tx_done_cleanup = otx2_nix_tx_done_cleanup,
- .set_queue_rate_limit = otx2_nix_tm_set_queue_rate_limit,
- .pool_ops_supported = otx2_nix_pool_ops_supported,
- .flow_ops_get = otx2_nix_dev_flow_ops_get,
- .get_module_info = otx2_nix_get_module_info,
- .get_module_eeprom = otx2_nix_get_module_eeprom,
- .fw_version_get = otx2_nix_fw_version_get,
- .flow_ctrl_get = otx2_nix_flow_ctrl_get,
- .flow_ctrl_set = otx2_nix_flow_ctrl_set,
- .timesync_enable = otx2_nix_timesync_enable,
- .timesync_disable = otx2_nix_timesync_disable,
- .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
- .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
- .timesync_adjust_time = otx2_nix_timesync_adjust_time,
- .timesync_read_time = otx2_nix_timesync_read_time,
- .timesync_write_time = otx2_nix_timesync_write_time,
- .vlan_offload_set = otx2_nix_vlan_offload_set,
- .vlan_filter_set = otx2_nix_vlan_filter_set,
- .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
- .vlan_tpid_set = otx2_nix_vlan_tpid_set,
- .vlan_pvid_set = otx2_nix_vlan_pvid_set,
- .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
- .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
- .read_clock = otx2_nix_read_clock,
-};
-
-static inline int
-nix_lf_attach(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct rsrc_attach_req *req;
-
- /* Attach NIX(lf) */
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->modify = true;
- req->nixlf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- dev->nix_msixoff = msix_rsp->nix_msixoff;
-
- return rc;
-}
-
-static inline int
-otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
-
- /* Detach all except npa lf */
- req->partial = true;
- req->nixlf = true;
- req->sso = true;
- req->ssow = true;
- req->timlfs = true;
- req->cptlfs = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static bool
-otx2_eth_dev_is_sdp(struct rte_pci_device *pci_dev)
-{
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- return true;
- return false;
-}
-
-static inline uint64_t
-nix_get_blkaddr(struct otx2_eth_dev *dev)
-{
- uint64_t reg;
-
- /* Reading the discovery register to know which NIX is the LF
- * attached to.
- */
- reg = otx2_read64(dev->bar2 +
- RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0));
-
- return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1;
-}
-
-static int
-otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, max_entries;
-
- eth_dev->dev_ops = &otx2_eth_dev_ops;
- eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
- eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
- eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
-
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- /* Setup callbacks for secondary process */
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
- return 0;
- }
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- rte_eth_copy_pci_info(eth_dev, pci_dev);
- eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
- memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
- offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
-
- /* Parse devargs string */
- rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
- if (rc) {
- otx2_err("Failed to parse devargs rc=%d", rc);
- goto error;
- }
-
- if (!dev->mbox_active) {
- /* Initialize the base otx2_dev object
- * only if already present
- */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
- }
- if (otx2_eth_dev_is_sdp(pci_dev))
- dev->sdp_link = true;
- else
- dev->sdp_link = false;
- /* Device generic callbacks */
- dev->ops = &otx2_dev_ops;
- dev->eth_dev = eth_dev;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto otx2_dev_uninit;
-
- dev->configured = 0;
- dev->drv_inited = true;
- dev->ptype_disable = 0;
- dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
-
- /* Attach NIX LF */
- rc = nix_lf_attach(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- dev->base = dev->bar2 + (nix_get_blkaddr(dev) << 20);
-
- /* Get NIX MSIX offset */
- rc = nix_lf_get_msix_offset(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- /* Register LF irq handlers */
- rc = otx2_nix_register_irqs(eth_dev);
- if (rc)
- goto mbox_detach;
-
- /* Get maximum number of supported MAC entries */
- max_entries = otx2_cgx_mac_max_entries_get(dev);
- if (max_entries < 0) {
- otx2_err("Failed to get max entries for mac addr");
- rc = -ENOTSUP;
- goto unregister_irq;
- }
-
- /* For VFs, returned max_entries will be 0. But to keep default MAC
- * address, one entry must be allocated. So setting up to 1.
- */
- if (max_entries == 0)
- max_entries = 1;
-
- eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
- RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- otx2_err("Failed to allocate memory for mac addr");
- rc = -ENOMEM;
- goto unregister_irq;
- }
-
- dev->max_mac_entries = max_entries;
-
- rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
- if (rc)
- goto free_mac_addrs;
-
- /* Update the mac address */
- memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
-
- /* Also sync same MAC address to CGX table */
- otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
-
- /* Initialize the tm data structures */
- otx2_nix_tm_conf_init(eth_dev);
-
- dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
- dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
-
- if (otx2_dev_is_96xx_A0(dev) ||
- otx2_dev_is_95xx_Ax(dev)) {
- dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
- dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
- }
-
- /* Create security ctx */
- rc = otx2_eth_sec_ctx_create(eth_dev);
- if (rc)
- goto free_mac_addrs;
- dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
-
- /* Initialize rte-flow */
- rc = otx2_flow_init(dev);
- if (rc)
- goto sec_ctx_destroy;
-
- otx2_nix_mc_filter_init(dev);
-
- otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
- " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
- eth_dev->data->port_id, dev->pf, dev->vf,
- OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
- dev->rx_offload_capa, dev->tx_offload_capa);
- return 0;
-
-sec_ctx_destroy:
- otx2_eth_sec_ctx_destroy(eth_dev);
-free_mac_addrs:
- rte_free(eth_dev->data->mac_addrs);
-unregister_irq:
- otx2_nix_unregister_irqs(eth_dev);
-mbox_detach:
- otx2_eth_dev_lf_detach(dev->mbox);
-otx2_npa_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- otx2_err("Failed to init nix eth_dev rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, i;
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Clear the flag since we are closing down */
- dev->configured = 0;
-
- /* Disable nix bpid config */
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
-
- npc_rx_disable(dev);
-
- /* Disable vlan offloads */
- otx2_nix_vlan_fini(eth_dev);
-
- /* Disable other rte_flow entries */
- otx2_flow_fini(dev);
-
- /* Free multicast filter list */
- otx2_nix_mc_filter_fini(dev);
-
- /* Disable PTP if already enabled */
- if (otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_disable(eth_dev);
-
- nix_cgx_stop_link_event(dev);
-
- /* Unregister the dev ops, this is required to stop VFs from
- * receiving link status updates on exit path.
- */
- dev->ops = NULL;
-
- /* Free up SQs */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
- eth_dev->data->nb_tx_queues = 0;
-
- /* Free up RQ's and CQ's */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
- otx2_nix_rx_queue_release(eth_dev, i);
- eth_dev->data->nb_rx_queues = 0;
-
- /* Free tm resources */
- rc = otx2_nix_tm_fini(eth_dev);
- if (rc)
- otx2_err("Failed to cleanup tm, rc=%d", rc);
-
- /* Unregister queue irqs */
- oxt2_nix_unregister_queue_irqs(eth_dev);
-
- /* Unregister cq irqs */
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
-
- rc = nix_lf_free(dev);
- if (rc)
- otx2_err("Failed to free nix lf, rc=%d", rc);
-
- rc = otx2_npa_lf_fini();
- if (rc)
- otx2_err("Failed to cleanup npa lf, rc=%d", rc);
-
- /* Disable security */
- otx2_eth_sec_fini(eth_dev);
-
- /* Destroy security ctx */
- otx2_eth_sec_ctx_destroy(eth_dev);
-
- rte_free(eth_dev->data->mac_addrs);
- eth_dev->data->mac_addrs = NULL;
- dev->drv_inited = false;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- otx2_nix_unregister_irqs(eth_dev);
-
- rc = otx2_eth_dev_lf_detach(dev->mbox);
- if (rc)
- otx2_err("Failed to detach resources, rc=%d", rc);
-
- /* Check if mbox close is needed */
- if (!mbox_close)
- return 0;
-
- if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
- /* Will be freed later by PMD */
- eth_dev->data->dev_private = NULL;
- return 0;
- }
-
- otx2_dev_fini(pci_dev, dev);
- return 0;
-}
-
-static int
-otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
-{
- otx2_eth_dev_uninit(eth_dev, true);
- return 0;
-}
-
-static int
-otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
-{
- int rc;
-
- rc = otx2_eth_dev_uninit(eth_dev, false);
- if (rc)
- return rc;
-
- return otx2_eth_dev_init(eth_dev);
-}
-
-static int
-nix_remove(struct rte_pci_device *pci_dev)
-{
- struct rte_eth_dev *eth_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_dev *otx2_dev;
- int rc;
-
- eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
- if (eth_dev) {
- /* Cleanup eth dev */
- rc = otx2_eth_dev_uninit(eth_dev, true);
- if (rc)
- return rc;
-
- rte_eth_dev_release_port(eth_dev);
- }
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Check for common resources */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
- return 0;
-
- otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
-
- if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
- goto exit;
-
- /* Safe to cleanup mbox as no more users */
- otx2_dev_fini(pci_dev, otx2_dev);
- rte_free(otx2_dev);
- return 0;
-
-exit:
- otx2_info("%s: common resource in use by other devices", pci_dev->name);
- return -EAGAIN;
-}
-
-static int
-nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- int rc;
-
- RTE_SET_USED(pci_drv);
-
- rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
- otx2_eth_dev_init);
-
- /* On error on secondary, recheck if port exists in primary or
- * in mid of detach state.
- */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
- if (!rte_eth_dev_allocated(pci_dev->device.name))
- return 0;
- return rc;
-}
-
-static const struct rte_pci_id pci_nix_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_nix = {
- .id_table = pci_nix_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
- RTE_PCI_DRV_INTR_LSC,
- .probe = nix_probe,
- .remove = nix_remove,
-};
-
-RTE_PMD_REGISTER_PCI(OCTEONTX2_PMD, pci_nix);
-RTE_PMD_REGISTER_PCI_TABLE(OCTEONTX2_PMD, pci_nix_map);
-RTE_PMD_REGISTER_KMOD_DEP(OCTEONTX2_PMD, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
deleted file mode 100644
index a5282c6c12..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ /dev/null
@@ -1,619 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_H__
-#define __OTX2_ETHDEV_H__
-
-#include <math.h>
-#include <stdint.h>
-
-#include <rte_common.h>
-#include <rte_ethdev.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf.h>
-#include <rte_mempool.h>
-#include <rte_security_driver.h>
-#include <rte_spinlock.h>
-#include <rte_string_fns.h>
-#include <rte_time.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_flow.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-#include "otx2_rx.h"
-#include "otx2_tm.h"
-#include "otx2_tx.h"
-
-#define OTX2_ETH_DEV_PMD_VERSION "1.0"
-
-/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
-
-/* Minimum CQ size should be 4K */
-#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
-#define otx2_ethdev_fixup_is_min_4k_q(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
-/* Limit CQ being full */
-#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
-#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
-
-/* Used for struct otx2_eth_dev::flags */
-#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
-
-/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
- * In Tx space is always reserved for this in FRS.
- */
-#define NIX_MAX_VTAG_INS 2
-#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
-
-/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
-#define NIX_L2_OVERHEAD \
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
-#define NIX_L2_MAX_LEN \
- (RTE_ETHER_MTU + NIX_L2_OVERHEAD)
-
-/* HW config of frame size doesn't include FCS */
-#define NIX_MAX_HW_FRS 9212
-#define NIX_MIN_HW_FRS 60
-
-/* Since HW FRS includes NPC VTAG insertion space, user has reduced FRS */
-#define NIX_MAX_FRS \
- (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
-
-#define NIX_MIN_FRS \
- (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
-
-#define NIX_MAX_MTU \
- (NIX_MAX_FRS - NIX_L2_OVERHEAD)
-
-#define NIX_MAX_SQB 512
-#define NIX_DEF_SQB 16
-#define NIX_MIN_SQB 8
-#define NIX_SQB_LIST_SPACE 2
-#define NIX_RSS_RETA_SIZE_MAX 256
-/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
-#define NIX_RSS_GRPS 8
-#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
-#define NIX_RSS_RETA_SIZE 64
-#define NIX_RX_MIN_DESC 16
-#define NIX_RX_MIN_DESC_ALIGN 16
-#define NIX_RX_NB_SEG_MAX 6
-#define NIX_CQ_ENTRY_SZ 128
-#define NIX_CQ_ALIGN 512
-#define NIX_SQB_LOWER_THRESH 70
-#define LMT_SLOT_MASK 0x7f
-#define NIX_RX_DEFAULT_RING_SZ 4096
-
-/* If PTP is enabled additional SEND MEM DESC is required which
- * takes 2 words, hence max 7 iova address are possible
- */
-#if defined(RTE_LIBRTE_IEEE1588)
-#define NIX_TX_NB_SEG_MAX 7
-#else
-#define NIX_TX_NB_SEG_MAX 9
-#endif
-
-#define NIX_TX_MSEG_SG_DWORDS \
- ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
- + NIX_TX_NB_SEG_MAX)
-
-/* Apply BP/DROP when CQ is 95% full */
-#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
-#define NIX_CQ_FULL_ERRATA_SKID (1024ull * 256)
-
-#define CQ_OP_STAT_OP_ERR 63
-#define CQ_OP_STAT_CQ_ERR 46
-
-#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
-#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
-
-#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
- * NIX_LF_CINTX_CNT[QCOUNT]
- * crosses this value
- */
-#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
-#define CQ_TIMER_THRESH_MAX 255
-
-#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
- | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-
-#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
- RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
- RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
- RTE_ETH_RSS_C_VLAN)
-
-#define NIX_TX_OFFLOAD_CAPA ( \
- RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
- RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
- RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
- RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_TSO | \
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
- RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
-
-#define NIX_RX_OFFLOAD_CAPA ( \
- RTE_ETH_RX_OFFLOAD_CHECKSUM | \
- RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_RX_OFFLOAD_SCATTER | \
- RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
- RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
- RTE_ETH_RX_OFFLOAD_RSS_HASH)
-
-#define NIX_DEFAULT_RSS_CTX_GROUP 0
-#define NIX_DEFAULT_RSS_MCAM_IDX -1
-
-#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
-
-#define NIX_TIMESYNC_TX_CMD_LEN 8
-/* Additional timesync values. */
-#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
-
-#define OCTEONTX2_PMD net_octeontx2
-
-#define otx2_ethdev_is_same_driver(dev) \
- (strcmp((dev)->device->driver->name, RTE_STR(OCTEONTX2_PMD)) == 0)
-
-enum nix_q_size_e {
- nix_q_size_16, /* 16 entries */
- nix_q_size_64, /* 64 entries */
- nix_q_size_256,
- nix_q_size_1K,
- nix_q_size_4K,
- nix_q_size_16K,
- nix_q_size_64K,
- nix_q_size_256K,
- nix_q_size_1M, /* Million entries */
- nix_q_size_max
-};
-
-enum nix_lso_tun_type {
- NIX_LSO_TUN_V4V4,
- NIX_LSO_TUN_V4V6,
- NIX_LSO_TUN_V6V4,
- NIX_LSO_TUN_V6V6,
- NIX_LSO_TUN_MAX,
-};
-
-struct otx2_qint {
- struct rte_eth_dev *eth_dev;
- uint8_t qintx;
-};
-
-struct otx2_rss_info {
- uint64_t nix_rss;
- uint32_t flowkey_cfg;
- uint16_t rss_size;
- uint8_t rss_grps;
- uint8_t alg_idx; /* Selected algo index */
- uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
- uint8_t key[NIX_HASH_KEY_SIZE];
-};
-
-struct otx2_eth_qconf {
- union {
- struct rte_eth_txconf tx;
- struct rte_eth_rxconf rx;
- } conf;
- void *mempool;
- uint32_t socket_id;
- uint16_t nb_desc;
- uint8_t valid;
-};
-
-struct otx2_fc_info {
- enum rte_eth_fc_mode mode; /**< Link flow control mode */
- uint8_t rx_pause;
- uint8_t tx_pause;
- uint8_t chan_cnt;
- uint16_t bpid[NIX_MAX_CHAN];
-};
-
-struct vlan_mkex_info {
- struct npc_xtract_info la_xtract;
- struct npc_xtract_info lb_xtract;
- uint64_t lb_lt_offset;
-};
-
-struct mcast_entry {
- struct rte_ether_addr mcast_mac;
- uint16_t mcam_index;
- TAILQ_ENTRY(mcast_entry) next;
-};
-
-TAILQ_HEAD(otx2_nix_mc_filter_tbl, mcast_entry);
-
-struct vlan_entry {
- uint32_t mcam_idx;
- uint16_t vlan_id;
- TAILQ_ENTRY(vlan_entry) next;
-};
-
-TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
-
-struct otx2_vlan_info {
- struct otx2_vlan_filter_tbl fltr_tbl;
- /* MKEX layer info */
- struct mcam_entry def_tx_mcam_ent;
- struct mcam_entry def_rx_mcam_ent;
- struct vlan_mkex_info mkex;
- /* Default mcam entry that matches vlan packets */
- uint32_t def_rx_mcam_idx;
- uint32_t def_tx_mcam_idx;
- /* MCAM entry that matches double vlan packets */
- uint32_t qinq_mcam_idx;
- /* Indices of tx_vtag def registers */
- uint32_t outer_vlan_idx;
- uint32_t inner_vlan_idx;
- uint16_t outer_vlan_tpid;
- uint16_t inner_vlan_tpid;
- uint16_t pvid;
- /* QinQ entry allocated before default one */
- uint8_t qinq_before_def;
- uint8_t pvid_insert_on;
- /* Rx vtag action type */
- uint8_t vtag_type_idx;
- uint8_t filter_on;
- uint8_t strip_on;
- uint8_t qinq_on;
- uint8_t promisc_on;
-};
-
-struct otx2_eth_dev {
- OTX2_DEV; /* Base class */
- RTE_MARKER otx2_eth_dev_data_start;
- uint16_t sqb_size;
- uint16_t rx_chan_base;
- uint16_t tx_chan_base;
- uint8_t rx_chan_cnt;
- uint8_t tx_chan_cnt;
- uint8_t lso_tsov4_idx;
- uint8_t lso_tsov6_idx;
- uint8_t lso_udp_tun_idx[NIX_LSO_TUN_MAX];
- uint8_t lso_tun_idx[NIX_LSO_TUN_MAX];
- uint64_t lso_tun_fmt;
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t mkex_pfl_name[MKEX_NAME_LEN];
- uint8_t max_mac_entries;
- bool dmac_filter_enable;
- uint8_t lf_tx_stats;
- uint8_t lf_rx_stats;
- uint8_t lock_rx_ctx;
- uint8_t lock_tx_ctx;
- uint16_t flags;
- uint16_t cints;
- uint16_t qints;
- uint8_t configured;
- uint8_t configured_qints;
- uint8_t configured_cints;
- uint8_t configured_nb_rx_qs;
- uint8_t configured_nb_tx_qs;
- uint8_t ptype_disable;
- uint16_t nix_msixoff;
- uintptr_t base;
- uintptr_t lmt_addr;
- uint16_t scalar_ena;
- uint16_t rss_tag_as_xor;
- uint16_t max_sqb_count;
- uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
- uint64_t rx_offloads;
- uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
- uint64_t tx_offloads;
- uint64_t rx_offload_capa;
- uint64_t tx_offload_capa;
- struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
- struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
- uint16_t txschq[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
- /* Dis-contiguous queues */
- uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Contiguous queues */
- uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t otx2_tm_root_lvl;
- uint16_t link_cfg_lvl;
- uint16_t tm_flags;
- uint16_t tm_leaf_cnt;
- uint64_t tm_rate_min;
- struct otx2_nix_tm_node_list node_list;
- struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
- struct otx2_rss_info rss_info;
- struct otx2_fc_info fc_info;
- uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- struct otx2_npc_flow_info npc_flow;
- struct otx2_vlan_info vlan_info;
- struct otx2_eth_qconf *tx_qconf;
- struct otx2_eth_qconf *rx_qconf;
- struct rte_eth_dev *eth_dev;
- eth_rx_burst_t rx_pkt_burst_no_offload;
- /* PTP counters */
- bool ptp_en;
- struct otx2_timesync_info tstamp;
- struct rte_timecounter systime_tc;
- struct rte_timecounter rx_tstamp_tc;
- struct rte_timecounter tx_tstamp_tc;
- double clk_freq_mult;
- uint64_t clk_delta;
- bool mc_tbl_set;
- struct otx2_nix_mc_filter_tbl mc_fltr_tbl;
- bool sdp_link; /* SDP flag */
- /* Inline IPsec params */
- uint16_t ipsec_in_max_spi;
- rte_spinlock_t ipsec_tbl_lock;
- uint8_t duplex;
- uint32_t speed;
-} __rte_cache_aligned;
-
-struct otx2_eth_txq {
- uint64_t cmd[8];
- int64_t fc_cache_pkts;
- uint64_t *fc_mem;
- void *lmt_addr;
- rte_iova_t io_addr;
- rte_iova_t fc_iova;
- uint16_t sqes_per_sqb_log2;
- int16_t nb_sqb_bufs_adj;
- uint64_t lso_tun_fmt;
- RTE_MARKER slow_path_start;
- uint16_t nb_sqb_bufs;
- uint16_t sq;
- uint64_t offloads;
- struct otx2_eth_dev *dev;
- struct rte_mempool *sqb_pool;
- struct otx2_eth_qconf qconf;
-} __rte_cache_aligned;
-
-struct otx2_eth_rxq {
- uint64_t mbuf_initializer;
- uint64_t data_off;
- uintptr_t desc;
- void *lookup_mem;
- uintptr_t cq_door;
- uint64_t wdata;
- int64_t *cq_status;
- uint32_t head;
- uint32_t qmask;
- uint32_t available;
- uint16_t rq;
- struct otx2_timesync_info *tstamp;
- RTE_MARKER slow_path_start;
- uint64_t aura;
- uint64_t offloads;
- uint32_t qlen;
- struct rte_mempool *pool;
- enum nix_q_size_e qsize;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_qconf qconf;
- uint16_t cq_drop;
-} __rte_cache_aligned;
-
-static inline struct otx2_eth_dev *
-otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
-{
- return eth_dev->data->dev_private;
-}
-
-/* Ops */
-int otx2_nix_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *dev_info);
-int otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev,
- const struct rte_flow_ops **ops);
-int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size);
-int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo);
-int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info);
-int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
-void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo);
-void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo);
-int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(void *rx_queue);
-int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
-
-void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
-int otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
-int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
-uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
-
-/* Multicast filter APIs */
-void otx2_nix_mc_filter_init(struct otx2_eth_dev *dev);
-void otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev);
-int otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev);
-int otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev);
-int otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr);
-
-/* MTU */
-int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
-int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
-void otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq);
-
-
-/* Link */
-void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
-int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
-void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-void otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
-int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
-int otx2_apply_link_speed(struct rte_eth_dev *eth_dev);
-
-/* IRQ */
-int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-void otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-
-/* Debug */
-int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
-int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
- struct rte_dev_reg_info *regs);
-int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
-void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
-
-/* Stats */
-int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats);
-int otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
- uint16_t queue_id, uint8_t stat_idx,
- uint8_t is_rx);
-int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats, unsigned int n);
-int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- uint64_t *values, unsigned int n);
-int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-
-/* RSS */
-void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
- uint8_t *key, uint32_t key_len);
-uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
- uint64_t ethdev_rss, uint8_t rss_level);
-int otx2_rss_set_hf(struct otx2_eth_dev *dev,
- uint32_t flowkey_cfg, uint8_t *alg_idx,
- uint8_t group, int mcam_index);
-int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
- uint16_t *ind_tbl);
-int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-/* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
-int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-
-/* Flow Control */
-int otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
-
-/* VLAN */
-int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
-void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
-int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on);
-void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
-int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid);
-int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
-
-/* Lookup configuration */
-void *otx2_nix_fastpath_lookup_mem_get(void);
-
-/* PTYPES */
-const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
-int otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask);
-
-/* Mac address handling */
-int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
-int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr,
- uint32_t index, uint32_t pool);
-void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
-int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
-
-/* Devargs */
-int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
- struct otx2_eth_dev *dev);
-
-/* Rx and Tx routines */
-void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
-void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
-void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
-
-/* Timesync - PTP routines */
-int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t flags);
-int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp);
-int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
-int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts);
-int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
- struct timespec *ts);
-int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
-int otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *time);
-int otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev);
-void otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
deleted file mode 100644
index 6d951bc7e2..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ /dev/null
@@ -1,811 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-#define NIX_REG_INFO(reg) {reg, #reg}
-#define NIX_REG_NAME_SZ 48
-
-struct nix_lf_reg_info {
- uint32_t offset;
- const char *name;
-};
-
-static const struct
-nix_lf_reg_info nix_lf_reg[] = {
- NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
- NIX_REG_INFO(NIX_LF_CFG),
- NIX_REG_INFO(NIX_LF_GINT),
- NIX_REG_INFO(NIX_LF_GINT_W1S),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT),
- NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_RAS),
- NIX_REG_INFO(NIX_LF_RAS_W1S),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
- NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
- NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
- NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
-};
-
-static int
-nix_lf_get_reg_count(struct otx2_eth_dev *dev)
-{
- int reg_count = 0;
-
- reg_count = RTE_DIM(nix_lf_reg);
- /* NIX_LF_TX_STATX */
- reg_count += dev->lf_tx_stats;
- /* NIX_LF_RX_STATX */
- reg_count += dev->lf_rx_stats;
- /* NIX_LF_QINTX_CNT*/
- reg_count += dev->qints;
- /* NIX_LF_QINTX_INT */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1S */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1C */
- reg_count += dev->qints;
- /* NIX_LF_CINTX_CNT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_WAIT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1C */
- reg_count += dev->cints;
-
- return reg_count;
-}
-
-int
-otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
-{
- uintptr_t nix_lf_base = dev->base;
- bool dump_stdout;
- uint64_t reg;
- uint32_t i;
-
- dump_stdout = data ? 0 : 1;
-
- for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
- reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
- if (dump_stdout && reg)
- nix_dump("%32s = 0x%" PRIx64,
- nix_lf_reg[i].name, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_TX_STATX */
- for (i = 0; i < dev->lf_tx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_TX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_RX_STATX */
- for (i = 0; i < dev->lf_rx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_RX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_CNT*/
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_INT */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1S */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1C */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_CNT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_WAIT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_WAIT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1C */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
- return 0;
-}
-
-int
-otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t *data = regs->data;
-
- if (data == NULL) {
- regs->length = nix_lf_get_reg_count(dev);
- regs->width = 8;
- return 0;
- }
-
- if (!regs->length ||
- regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
- otx2_nix_reg_dump(dev, data);
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline void
-nix_lf_sq_dump(__otx2_io struct nix_sq_ctx_s *ctx)
-{
- nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
- ctx->sqe_way_mask, ctx->cq);
- nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->sdp_mcast, ctx->substream);
- nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
- ctx->qint_idx, ctx->ena);
-
- nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
- ctx->sqb_count, ctx->default_chan);
- nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
- ctx->smq_rr_quantum, ctx->sso_ena);
- nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
- ctx->xoff, ctx->cq_ena, ctx->smq);
-
- nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
- ctx->sqe_stype, ctx->sq_int_ena);
- nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
- ctx->sq_int, ctx->sqb_aura);
- nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
-
- nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
- ctx->smq_next_sq_vld, ctx->smq_pend);
- nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
- ctx->smenq_next_sqb_vld, ctx->head_offset);
- nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
- ctx->smenq_offset, ctx->tail_offset);
- nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
- ctx->smq_lso_segnum, ctx->smq_next_sq);
- nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
- ctx->mnq_dis, ctx->lmt_dis);
- nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
- ctx->cq_limit, ctx->max_sqe_size);
-
- nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
- nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
- nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
- nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
- nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
-
- nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
- ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
- nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
- ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
- nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
- ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
- nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
-
- nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
- (uint64_t)ctx->scm_lso_rem);
- nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_octs);
- nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_pkts);
-}
-
-static inline void
-nix_lf_rq_dump(__otx2_io struct nix_rq_ctx_s *ctx)
-{
- nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->wqe_aura, ctx->substream);
- nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
- ctx->cq, ctx->ena_wqwd);
- nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
- ctx->ipsech_ena, ctx->sso_ena);
- nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
-
- nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
- ctx->lpb_drop_ena, ctx->spb_drop_ena);
- nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
- ctx->xqe_drop_ena, ctx->wqe_caching);
- nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
- ctx->pb_caching, ctx->sso_tt);
- nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
- ctx->sso_grp, ctx->lpb_aura);
- nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
-
- nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
- ctx->xqe_hdr_split, ctx->xqe_imm_copy);
- nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
- ctx->xqe_imm_size, ctx->later_skip);
- nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
- ctx->first_skip, ctx->lpb_sizem1);
- nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
- ctx->spb_ena, ctx->wqe_skip);
- nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
-
- nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
- ctx->spb_pool_pass, ctx->spb_pool_drop);
- nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
- ctx->spb_aura_pass, ctx->spb_aura_drop);
- nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
- ctx->wqe_pool_pass, ctx->wqe_pool_drop);
- nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
- ctx->xqe_pass, ctx->xqe_drop);
-
- nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
- ctx->qint_idx, ctx->rq_int_ena);
- nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
- ctx->rq_int, ctx->lpb_pool_pass);
- nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
- ctx->lpb_pool_drop, ctx->lpb_aura_pass);
- nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
-
- nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
- ctx->flow_tagw, ctx->bad_utag);
- nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
- ctx->good_utag, ctx->ltag);
-
- nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
- nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
- nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
-}
-
-static inline void
-nix_lf_cq_dump(__otx2_io struct nix_cq_ctx_s *ctx)
-{
- nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
-
- nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
- nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
- ctx->avg_con, ctx->cint_idx);
- nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
- ctx->cq_err, ctx->qint_idx);
- nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
- ctx->bpid, ctx->bp_ena);
-
- nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
- ctx->update_time, ctx->avg_level);
- nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
- ctx->head, ctx->tail);
-
- nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
- ctx->cq_err_int_ena, ctx->cq_err_int);
- nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
- ctx->qsize, ctx->caching);
- nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
- ctx->substream, ctx->ena);
- nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
- ctx->drop_ena, ctx->drop);
- nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
-}
-
-int
-otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, q, rq = eth_dev->data->nb_rx_queues;
- int sq = eth_dev->data->nb_tx_queues;
- struct otx2_mbox *mbox = dev->mbox;
- struct npa_aq_enq_rsp *npa_rsp;
- struct npa_aq_enq_req *npa_aq;
- struct otx2_npa_lf *npa_lf;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
-
- npa_lf = otx2_npa_lf_obj_get();
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get cq context");
- goto fail;
- }
- nix_dump("============== port=%d cq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_cq_dump(&rsp->cq);
- }
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc) {
- otx2_err("Failed to get rq context");
- goto fail;
- }
- nix_dump("============== port=%d rq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_rq_dump(&rsp->rq);
- }
- for (q = 0; q < sq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get sq context");
- goto fail;
- }
- nix_dump("============== port=%d sq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_sq_dump(&rsp->sq);
-
- if (!npa_lf) {
- otx2_err("NPA LF doesn't exist");
- continue;
- }
-
- /* Dump SQB Aura minimal info */
- npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- npa_aq->aura_id = rsp->sq.sqb_aura;
- npa_aq->ctype = NPA_AQ_CTYPE_AURA;
- npa_aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
- if (rc) {
- otx2_err("Failed to get sq's sqb_aura context");
- continue;
- }
-
- nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
- npa_rsp->aura.pool_addr);
- nix_dump("SQB Aura W1: ena\t\t\t%d",
- npa_rsp->aura.ena);
- nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.count);
- nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.limit);
- nix_dump("SQB Aura W3: fc_ena\t\t%d",
- npa_rsp->aura.fc_ena);
- nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
- npa_rsp->aura.fc_addr);
- }
-
-fail:
- return rc;
-}
-
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
- nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
- cq->tag, cq->q, cq->node, cq->cqe_type);
-
- nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
- rx->chan, rx->desc_sizem1);
- nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
- rx->imm_copy, rx->express);
- nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
- rx->wqwd, rx->errlev, rx->errcode);
- nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
- rx->latype, rx->lbtype, rx->lctype);
- nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
- rx->ldtype, rx->letype, rx->lftype);
- nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
- rx->lgtype, rx->lhtype);
-
- nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
- nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
- rx->l2m, rx->l2b, rx->l3m, rx->l3b);
- nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
- rx->vtag0_valid, rx->vtag0_gone);
- nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
- rx->vtag1_valid, rx->vtag1_gone);
- nix_dump("W1: pkind \t%d", rx->pkind);
- nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
- rx->vtag0_tci, rx->vtag1_tci);
-
- nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
- rx->laflags, rx->lbflags, rx->lcflags);
- nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
- rx->ldflags, rx->leflags, rx->lfflags);
- nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
- rx->lgflags, rx->lhflags);
-
- nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
- rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
- nix_dump("W3: match_id \t%d", rx->match_id);
-
- nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
- rx->laptr, rx->lbptr, rx->lcptr);
- nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
- rx->ldptr, rx->leptr, rx->lfptr);
- nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
- nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
- rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
-static uint8_t
-prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
- uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
-{
- uint8_t k = 0;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_SMQX_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_SMQ[%u]_CFG", schq);
-
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_MDQX_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PIR", schq);
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_CIR", schq);
-
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
-
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL4X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL3X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL2X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL1:
-
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL1X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SW_XOFF", schq);
-
- reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
- break;
- default:
- break;
- }
-
- if (k > MAX_REGS_PER_MBOX_MSG) {
- nix_dump("\t!!!NIX TM Registers request overflow!!!");
- return 0;
- }
- return k;
-}
-
-/* Dump TM hierarchy and registers */
-void
-otx2_nix_tm_dump(struct otx2_eth_dev *dev)
-{
- char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
- struct otx2_nix_tm_node *tm_node, *root_node, *parent;
- uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
- struct nix_txschq_config *req;
- const char *lvlstr, *parent_lvlstr;
- struct nix_txschq_config *rsp;
- uint32_t schq, parent_schq;
- int hw_lvl, j, k, rc;
-
- nix_dump("===TM hierarchy and registers dump of %s===",
- dev->eth_dev->data->name);
-
- root_node = NULL;
-
- for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != hw_lvl)
- continue;
-
- parent = tm_node->parent;
- if (hw_lvl == NIX_TXSCH_LVL_CNT) {
- lvlstr = "SQ";
- schq = tm_node->id;
- } else {
- lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
- schq = tm_node->hw_id;
- }
-
- if (parent) {
- parent_schq = parent->hw_id;
- parent_lvlstr =
- nix_hwlvl2str(parent->hw_lvl);
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- parent_schq = otx2_nix_get_link(dev);
- parent_lvlstr = "LINK";
- } else {
- parent_schq = tm_node->parent_hw_id;
- parent_lvlstr =
- nix_hwlvl2str(tm_node->hw_lvl + 1);
- }
-
- nix_dump("%s_%d->%s_%d", lvlstr, schq,
- parent_lvlstr, parent_schq);
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- /* Need to dump TL1 when root is TL2 */
- if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
- root_node = tm_node;
-
- /* Dump registers only when HWRES is present */
- k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
- otx2_nix_get_link(dev), reg,
- regstr);
- if (!k)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = tm_node->hw_lvl;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
- nix_dump("\n");
- }
-
- /* Dump TL1 node data when root level is TL2 */
- if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
- k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
- root_node->parent_hw_id,
- otx2_nix_get_link(dev),
- reg, regstr);
- if (!k)
- return;
-
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = NIX_TXSCH_LVL_TL1;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
-
- otx2_nix_queues_ctx_dump(dev->eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
deleted file mode 100644
index 60bf6c3f5f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ /dev/null
@@ -1,215 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-#include <math.h>
-
-#include "otx2_ethdev.h"
-
-static int
-parse_flow_max_priority(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the max priority to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the prealloc size to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_reta_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val <= RTE_ETH_RSS_RETA_SIZE_64)
- val = RTE_ETH_RSS_RETA_SIZE_64;
- else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
- val = RTE_ETH_RSS_RETA_SIZE_128;
- else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
- val = RTE_ETH_RSS_RETA_SIZE_256;
- else
- val = NIX_RSS_RETA_SIZE;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flag(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- *(uint16_t *)extra_args = atoi(value);
-
- return 0;
-}
-
-static int
-parse_sqb_count(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_switch_header_type(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- if (strcmp(value, "higig2") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_HIGIG;
-
- if (strcmp(value, "dsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EDSA;
-
- if (strcmp(value, "chlen90b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_90B;
-
- if (strcmp(value, "chlen24b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_24B;
-
- if (strcmp(value, "exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EXDSA;
-
- if (strcmp(value, "vlan_exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_VLAN_EXDSA;
-
- return 0;
-}
-
-#define OTX2_RSS_RETA_SIZE "reta_size"
-#define OTX2_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
-#define OTX2_SCL_ENABLE "scalar_enable"
-#define OTX2_MAX_SQB_COUNT "max_sqb_count"
-#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
-#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
-#define OTX2_SWITCH_HEADER_TYPE "switch_header"
-#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
-#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
-#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
-
-int
-otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
-{
- uint16_t rss_size = NIX_RSS_RETA_SIZE;
- uint16_t sqb_count = NIX_MAX_SQB;
- uint16_t flow_prealloc_size = 8;
- uint16_t switch_header_type = 0;
- uint16_t flow_max_priority = 3;
- uint16_t ipsec_in_max_spi = 1;
- uint16_t rss_tag_as_xor = 0;
- uint16_t scalar_enable = 0;
- struct rte_kvargs *kvlist;
- uint16_t lock_rx_ctx = 0;
- uint16_t lock_tx_ctx = 0;
-
- if (devargs == NULL)
- goto null_devargs;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
- &parse_reta_size, &rss_size);
- rte_kvargs_process(kvlist, OTX2_IPSEC_IN_MAX_SPI,
- &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
- rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
- &parse_flag, &scalar_enable);
- rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
- &parse_sqb_count, &sqb_count);
- rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
- &parse_flow_prealloc_size, &flow_prealloc_size);
- rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
- &parse_flow_max_priority, &flow_max_priority);
- rte_kvargs_process(kvlist, OTX2_SWITCH_HEADER_TYPE,
- &parse_switch_header_type, &switch_header_type);
- rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
- &parse_flag, &rss_tag_as_xor);
- rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
- &parse_flag, &lock_rx_ctx);
- rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
- &parse_flag, &lock_tx_ctx);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-
-null_devargs:
- dev->ipsec_in_max_spi = ipsec_in_max_spi;
- dev->scalar_ena = scalar_enable;
- dev->rss_tag_as_xor = rss_tag_as_xor;
- dev->max_sqb_count = sqb_count;
- dev->lock_rx_ctx = lock_rx_ctx;
- dev->lock_tx_ctx = lock_tx_ctx;
- dev->rss_info.rss_size = rss_size;
- dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
- dev->npc_flow.flow_max_priority = flow_max_priority;
- dev->npc_flow.switch_header_type = switch_header_type;
- return 0;
-
-exit:
- return -EINVAL;
-}
-
-RTE_PMD_REGISTER_PARAM_STRING(OCTEONTX2_PMD,
- OTX2_RSS_RETA_SIZE "=<64|128|256>"
- OTX2_IPSEC_IN_MAX_SPI "=<1-65535>"
- OTX2_SCL_ENABLE "=1"
- OTX2_MAX_SQB_COUNT "=<8-512>"
- OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
- OTX2_FLOW_MAX_PRIORITY "=<1-32>"
- OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b|chlen24b>"
- OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>"
- OTX2_LOCK_RX_CTX "=1"
- OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
deleted file mode 100644
index cc573bb2e8..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ /dev/null
@@ -1,493 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-
-static void
-nix_lf_err_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
- /* Enable all dev interrupt except for RQ_DISABLED */
- otx2_nix_err_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
-}
-
-static void
-nix_lf_ras_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_RAS);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
- /* Enable dev interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
-}
-
-static inline uint8_t
-nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, dev->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
-{
- return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
-{
- return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
-{
- return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
-}
-
-static inline void
-nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
-{
- uint64_t reg;
-
- reg = otx2_read64(dev->base + off);
- if (reg & BIT_ULL(44))
- otx2_err("SQ=%d err_code=0x%x",
- (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
-}
-
-static void
-nix_lf_cq_irq(void *param)
-{
- struct otx2_qint *cint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = cint->eth_dev;
- struct otx2_eth_dev *dev;
-
- dev = otx2_eth_pmd_priv(eth_dev);
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
-}
-
-static void
-nix_lf_q_irq(void *param)
-{
- struct otx2_qint *qint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = qint->eth_dev;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t irq, qintx = qint->qintx;
- int q, cq, rq, sq;
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
- intr, qintx, dev->pf, dev->vf);
-
- /* Handle RQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- rq = q % dev->qints;
- irq = nix_lf_rq_irq_get_and_clear(dev, rq);
-
- if (irq & BIT_ULL(NIX_RQINT_DROP))
- otx2_err("RQ=%d NIX_RQINT_DROP", rq);
-
- if (irq & BIT_ULL(NIX_RQINT_RED))
- otx2_err("RQ=%d NIX_RQINT_RED", rq);
- }
-
- /* Handle CQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- cq = q % dev->qints;
- irq = nix_lf_cq_irq_get_and_clear(dev, cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
- otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
- otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
- otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
- }
-
- /* Handle SQ interrupts */
- for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
- sq = q % dev->qints;
- irq = nix_lf_sq_irq_get_and_clear(dev, sq);
-
- if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
- otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- }
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-int
-oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q, sqs, rqs, qs, rc = 0;
-
- /* Figure out max qintx required */
- rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
- sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
- qs = RTE_MAX(rqs, sqs);
-
- dev->configured_qints = qs;
-
- for (q = 0; q < qs; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- dev->qints_mem[q].eth_dev = eth_dev;
- dev->qints_mem[q].qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- if (rc)
- break;
-
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_qints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- }
-}
-
-int
-oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rc = 0, vec, q;
-
- dev->configured_cints = RTE_MIN(dev->cints,
- eth_dev->data->nb_rx_queues);
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- dev->cints_mem[q].eth_dev = eth_dev;
- dev->cints_mem[q].qintx = q;
-
- /* Sync cints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- if (rc) {
- otx2_err("Fail to register CQ irq, rc=%d", rc);
- return rc;
- }
-
- rc = rte_intr_vec_list_alloc(handle, "intr_vec",
- dev->configured_cints);
- if (rc) {
- otx2_err("Fail to allocate intr vec list, "
- "rc=%d", rc);
- return rc;
- }
- /* VFIO vector zero is resereved for misc interrupt so
- * doing required adjustment. (b13bfab4cd)
- */
- if (rte_intr_vec_list_index_set(handle, q,
- RTE_INTR_VEC_RXTX_OFFSET + vec))
- return -1;
-
- /* Configure CQE interrupt coalescing parameters */
- otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
- (CQ_CQE_THRESH_DEFAULT << 32) |
- (CQ_TIMER_THRESH_DEFAULT << 48)),
- dev->base + NIX_LF_CINTX_WAIT((q)));
-
- /* Keeping the CQ interrupt disabled as the rx interrupt
- * feature needs to be enabled/disabled on demand.
- */
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- }
-}
-
-int
-otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
- dev->nix_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = nix_lf_register_err_irq(eth_dev);
- /* Register RAS interrupt */
- rc |= nix_lf_register_ras_irq(eth_dev);
-
- return rc;
-}
-
-void
-otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
-{
- nix_lf_unregister_err_irq(eth_dev);
- nix_lf_unregister_ras_irq(eth_dev);
-}
-
-int
-otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1S(rx_queue_id));
-
- return 0;
-}
-
-int
-otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Clear and disable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1C(rx_queue_id));
-
- return 0;
-}
-
-void
-otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable all nix lf error interrupts except
- * RQ_DISABLED and CQ_DISABLED.
- */
- if (enb)
- otx2_write64(~(BIT_ULL(11) | BIT_ULL(24)),
- dev->base + NIX_LF_ERR_INT_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
-}
-
-void
-otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (enb)
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
deleted file mode 100644
index 48781514c3..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ /dev/null
@@ -1,589 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_ethdev.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_frs_cfg *req;
- int rc;
-
- if (dev->configured && otx2_ethdev_is_ptp_en(dev))
- frame_size += NIX_TIMESYNC_RX_OFFSET;
-
- buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
-
- /* Refuse MTU that requires the support of scattered packets
- * when this feature has not been enabled before.
- */
- if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
- return -EINVAL;
-
- /* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
- (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->update_smq = true;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
- /* FRS HW config should exclude FCS but include NPC VTAG insert size */
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Now just update Rx MAXLEN */
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- return rc;
-}
-
-int
-otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_rxq *rxq;
- int rc;
-
- rxq = data->rx_queues[0];
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- rc = otx2_nix_mtu_set(eth_dev, data->mtu);
- if (rc)
- otx2_err("Failed to set default MTU size %d", rc);
-
- return rc;
-}
-
-static void
-nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
-
- otx2_mbox_process(mbox);
-}
-
-void
-otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
- eth_dev->data->promiscuous = en;
- otx2_nix_vlan_update_promisc(eth_dev, en);
-}
-
-int
-otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
-{
- otx2_nix_promisc_config(eth_dev, 1);
- nix_cgx_promisc_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- otx2_nix_promisc_config(eth_dev, dev->dmac_filter_enable);
- nix_cgx_promisc_config(eth_dev, 0);
- dev->dmac_filter_enable = false;
-
- return 0;
-}
-
-static void
-nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
- else if (eth_dev->data->promiscuous)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 0);
-
- return 0;
-}
-
-void
-otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo)
-{
- struct otx2_eth_rxq *rxq;
-
- rxq = eth_dev->data->rx_queues[queue_id];
-
- qinfo->mp = rxq->pool;
- qinfo->scattered_rx = eth_dev->data->scattered_rx;
- qinfo->nb_desc = rxq->qconf.nb_desc;
-
- qinfo->conf.rx_free_thresh = 0;
- qinfo->conf.rx_drop_en = 0;
- qinfo->conf.rx_deferred_start = 0;
- qinfo->conf.offloads = rxq->offloads;
-}
-
-void
-otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo)
-{
- struct otx2_eth_txq *txq;
-
- txq = eth_dev->data->tx_queues[queue_id];
-
- qinfo->nb_desc = txq->qconf.nb_desc;
-
- qinfo->conf.tx_thresh.pthresh = 0;
- qinfo->conf.tx_thresh.hthresh = 0;
- qinfo->conf.tx_thresh.wthresh = 0;
-
- qinfo->conf.tx_free_thresh = 0;
- qinfo->conf.tx_rs_thresh = 0;
- qinfo->conf.offloads = txq->offloads;
- qinfo->conf.tx_deferred_start = 0;
-}
-
-int
-otx2_rx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } rx_offload_map[] = {
- {NIX_RX_OFFLOAD_RSS_F, "RSS,"},
- {NIX_RX_OFFLOAD_PTYPE_F, " Ptype,"},
- {NIX_RX_OFFLOAD_CHECKSUM_F, " Checksum,"},
- {NIX_RX_OFFLOAD_VLAN_STRIP_F, " VLAN Strip,"},
- {NIX_RX_OFFLOAD_MARK_UPDATE_F, " Mark Update,"},
- {NIX_RX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_RX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
- "Scalar, Rx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Rx offload info */
- for (i = 0; i < RTE_DIM(rx_offload_map); i++) {
- if (dev->rx_offload_flags & rx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- rx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-int
-otx2_tx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } tx_offload_map[] = {
- {NIX_TX_OFFLOAD_L3_L4_CSUM_F, " Inner L3/L4 csum,"},
- {NIX_TX_OFFLOAD_OL3_OL4_CSUM_F, " Outer L3/L4 csum,"},
- {NIX_TX_OFFLOAD_VLAN_QINQ_F, " VLAN Insertion,"},
- {NIX_TX_OFFLOAD_MBUF_NOFF_F, " MBUF free disable,"},
- {NIX_TX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_TX_OFFLOAD_TSO_F, " TSO,"},
- {NIX_TX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
- "Scalar, Tx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Tx offload info */
- for (i = 0; i < RTE_DIM(tx_offload_map); i++) {
- if (dev->tx_offload_flags & tx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- tx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-static void
-nix_rx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_CQ_OP_STATUS));
- if (val & (OP_ERR | CQ_ERR))
- val = 0;
-
- *tail = (uint32_t)(val & 0xFFFFF);
- *head = (uint32_t)((val >> 20) & 0xFFFFF);
-}
-
-uint32_t
-otx2_nix_rx_queue_count(void *rx_queue)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
- uint32_t head, tail;
-
- nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
- return (tail - head) % rxq->qlen;
-}
-
-static inline int
-nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
-{
- /* Check given offset(queue index) has packet filled by HW */
- if (tail > head && offset <= tail && offset >= head)
- return 1;
- /* Wrap around case */
- if (head > tail && (offset >= head || offset <= tail))
- return 1;
-
- return 0;
-}
-
-int
-otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- uint32_t head, tail;
-
- if (rxq->qlen <= offset)
- return -EINVAL;
-
- nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
- &head, &tail, rxq->rq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_RX_DESC_DONE;
- else
- return RTE_ETH_RX_DESC_AVAIL;
-}
-
-static void
-nix_tx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_SQ_OP_STATUS));
- if (val & OP_ERR)
- val = 0;
-
- *tail = (uint32_t)((val >> 28) & 0x3F);
- *head = (uint32_t)((val >> 20) & 0x3F);
-}
-
-int
-otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint32_t head, tail;
-
- if (txq->qconf.nb_desc <= offset)
- return -EINVAL;
-
- nix_tx_head_tail_get(txq->dev, &head, &tail, txq->sq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_TX_DESC_DONE;
- else
- return RTE_ETH_TX_DESC_FULL;
-}
-
-/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
-int
-otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
-{
- RTE_SET_USED(txq);
- RTE_SET_USED(free_cnt);
-
- return 0;
-}
-
-int
-otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc = (int)fw_size;
-
- if (fw_size > sizeof(dev->mkex_pfl_name))
- rc = sizeof(dev->mkex_pfl_name);
-
- rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
-
- rc += 1; /* Add the size of '\0' */
- if (fw_size < (size_t)rc)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
-{
- RTE_SET_USED(eth_dev);
-
- if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
- return 0;
-
- return -ENOTSUP;
-}
-
-int
-otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_ops **ops)
-{
- *ops = &otx2_flow_ops;
- return 0;
-}
-
-static struct cgx_fw_data *
-nix_get_fwdata(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_fw_data *rsp = NULL;
- int rc;
-
- otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get fw data: %d", rc);
- return NULL;
- }
-
- return rsp;
-}
-
-int
-otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
- modinfo->eeprom_len = SFP_EEPROM_SIZE;
-
- return 0;
-}
-
-int
-otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- if (info->offset + info->length > SFP_EEPROM_SIZE)
- return -EINVAL;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
- info->length);
-
- return 0;
-}
-
-int
-otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- devinfo->min_rx_bufsize = NIX_MIN_FRS;
- devinfo->max_rx_pktlen = NIX_MAX_FRS;
- devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_mac_addrs = dev->max_mac_entries;
- devinfo->max_vfs = pci_dev->max_vfs;
- devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
- devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
- if (dev->configured && otx2_ethdev_is_ptp_en(dev)) {
- devinfo->max_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->min_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->max_rx_pktlen -= NIX_TIMESYNC_RX_OFFSET;
- }
-
- devinfo->rx_offload_capa = dev->rx_offload_capa;
- devinfo->tx_offload_capa = dev->tx_offload_capa;
- devinfo->rx_queue_offload_capa = 0;
- devinfo->tx_queue_offload_capa = 0;
-
- devinfo->reta_size = dev->rss_info.rss_size;
- devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
- devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
-
- devinfo->default_rxconf = (struct rte_eth_rxconf) {
- .rx_drop_en = 0,
- .offloads = 0,
- };
-
- devinfo->default_txconf = (struct rte_eth_txconf) {
- .offloads = 0,
- };
-
- devinfo->default_rxportconf = (struct rte_eth_dev_portconf) {
- .ring_size = NIX_RX_DEFAULT_RING_SZ,
- };
-
- devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = NIX_RX_MIN_DESC,
- .nb_align = NIX_RX_MIN_DESC_ALIGN,
- .nb_seg_max = NIX_RX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
- };
- devinfo->rx_desc_lim.nb_max =
- RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
- NIX_RX_MIN_DESC_ALIGN);
-
- devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = 1,
- .nb_align = 1,
- .nb_seg_max = NIX_TX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
- };
-
- /* Auto negotiation disabled */
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
- if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
- RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
-
- /* 50G and 100G to be supported for board version C0
- * and above.
- */
- if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
- RTE_ETH_LINK_SPEED_100G;
- }
-
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
- RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
- devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
deleted file mode 100644
index 4d40184de4..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ /dev/null
@@ -1,923 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_memzone.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_fp.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#define ERR_STR_SZ 256
-
-struct eth_sec_tag_const {
- RTE_STD_C11
- union {
- struct {
- uint32_t rsvd_11_0 : 12;
- uint32_t port : 8;
- uint32_t event_type : 4;
- uint32_t rsvd_31_24 : 8;
- };
- uint32_t u32;
- };
-};
-
-static struct rte_cryptodev_capabilities otx2_eth_sec_crypto_caps[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 20,
- .max = 64,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- },
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability otx2_eth_sec_capabilities[] = {
- { /* IPsec Inline Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Inline Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-lookup_mem_sa_tbl_clear(struct rte_eth_dev *eth_dev)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return;
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
- if (sa_tbl[port] == NULL)
- return;
-
- rte_free(sa_tbl[port]);
- sa_tbl[port] = NULL;
-}
-
-static int
-lookup_mem_sa_index_update(struct rte_eth_dev *eth_dev, int spi, void *sa,
- char *err_str)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not find fastpath lookup table");
- return -EINVAL;
- }
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
-
- if (sa_tbl[port] == NULL) {
- sa_tbl[port] = rte_malloc(NULL, dev->ipsec_in_max_spi *
- sizeof(uint64_t), 0);
- }
-
- sa_tbl[port][spi] = (uint64_t)sa;
-
- return 0;
-}
-
-static inline void
-in_sa_mz_name_get(char *name, int size, uint16_t port)
-{
- snprintf(name, size, "otx2_ipsec_in_sadb_%u", port);
-}
-
-static struct otx2_ipsec_fp_in_sa *
-in_sa_get(uint16_t port, int sa_index)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- struct otx2_ipsec_fp_in_sa *sa;
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- otx2_err("Could not get the memzone reserved for IN SA DB");
- return NULL;
- }
-
- sa = mz->addr;
-
- return sa + sa_index;
-}
-
-static int
-ipsec_sa_const_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_ip *sess)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- sess->partial_len = sizeof(struct rte_ipv4_hdr);
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- sess->partial_len += sizeof(struct rte_esp_hdr);
- sess->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- sess->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- sess->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- sess->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- sess->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- sess->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- }
- return 0;
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- sess->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- sess->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- sess->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
-
-static int
-hmac_init(struct otx2_ipsec_fp_sa_ctl *ctl, struct otx2_cpt_qp *qp,
- const uint8_t *auth_key, int len, uint8_t *hmac_key)
-{
- struct inst_data {
- struct otx2_cpt_res cpt_res;
- uint8_t buffer[64];
- } *md;
-
- volatile struct otx2_cpt_res *res;
- uint64_t timeout, lmt_status;
- struct otx2_cpt_inst_s inst;
- rte_iova_t md_iova;
- int ret;
-
- memset(&inst, 0, sizeof(struct otx2_cpt_inst_s));
-
- md = rte_zmalloc(NULL, sizeof(struct inst_data), OTX2_CPT_RES_ALIGN);
- if (md == NULL)
- return -ENOMEM;
-
- memcpy(md->buffer, auth_key, len);
-
- md_iova = rte_malloc_virt2iova(md);
- if (md_iova == RTE_BAD_IOVA) {
- ret = -EINVAL;
- goto free_md;
- }
-
- inst.res_addr = md_iova + offsetof(struct inst_data, cpt_res);
- inst.opcode = OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD;
- inst.param2 = ctl->auth_type;
- inst.dlen = len;
- inst.dptr = md_iova + offsetof(struct inst_data, buffer);
- inst.rptr = inst.dptr;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
-
- md->cpt_res.compcode = 0;
- md->cpt_res.uc_compcode = 0xff;
-
- timeout = rte_get_timer_cycles() + 5 * rte_get_timer_hz();
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(qp->lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- res = (volatile struct otx2_cpt_res *)&md->cpt_res;
-
- /* Wait until instruction completes or times out */
- while (res->uc_compcode == 0xff) {
- if (rte_get_timer_cycles() > timeout)
- break;
- }
-
- if (res->u16[0] != OTX2_SEC_COMP_GOOD) {
- ret = -EIO;
- goto free_md;
- }
-
- /* Retrieve the ipad and opad from rptr */
- memcpy(hmac_key, md->buffer, 48);
-
- ret = 0;
-
-free_md:
- rte_free(md);
- return ret;
-}
-
-static int
-eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_out_sa *sa;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_qp *qp;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- sess = &priv->ipsec.ip;
-
- sa = &sess->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sess, 0, sizeof(struct otx2_sec_session_ipsec_ip));
-
- sess->seq = 1;
-
- ret = ipsec_sa_const_set(ipsec, crypto_xform, sess);
- if (ret < 0)
- return ret;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- memcpy(sa->nonce, &ipsec->salt, 4);
-
- if (ipsec->options.udp_encap == 1) {
- sa->udp_src = 4500;
- sa->udp_dst = 4500;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- /* Start ip id from 1 */
- sess->ip_id = 1;
-
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memcpy(&sa->ip_src, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&sa->ip_dst, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else {
- return -EINVAL;
- }
- } else {
- return -EINVAL;
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- /* Determine word 7 of CPT instruction */
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
- inst.cptr = rte_mempool_virt2iova(sa);
- sess->inst_w7 = inst.u64[7];
-
- /* Get CPT QP to be used for this SA */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret)
- return ret;
-
- sess->qp = qp;
-
- sess->cpt_lmtline = qp->lmtline;
- sess->cpt_nq_reg = qp->lf_nq_reg;
-
- /* Populate control word */
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- goto cpt_put;
-
- if (auth_key_len && auth_key) {
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- if (ret)
- goto cpt_put;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- return 0;
-cpt_put:
- otx2_sec_idev_tx_cpt_qp_put(sess->qp);
- return ret;
-}
-
-static int
-eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- char err_str[ERR_STR_SZ];
- struct otx2_cpt_qp *qp;
-
- memset(err_str, 0, ERR_STR_SZ);
-
- if (ipsec->spi >= dev->ipsec_in_max_spi) {
- otx2_err("SPI exceeds max supported");
- return -EINVAL;
- }
-
- sa = in_sa_get(port, ipsec->spi);
- if (sa == NULL)
- return -ENOMEM;
-
- ctl = &sa->ctl;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- sess = &priv->ipsec.ip;
-
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
-
- if (ctl->valid) {
- snprintf(err_str, ERR_STR_SZ, "SA already registered");
- ret = -EEXIST;
- goto tbl_unlock;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0) {
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- } else {
- snprintf(err_str, ERR_STR_SZ, "Invalid cipher key len");
- ret = -EINVAL;
- goto sa_clear;
- }
-
- sess->in_sa = sa;
-
- sa->userdata = priv->userdata;
-
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- if (lookup_mem_sa_index_update(eth_dev, ipsec->spi, sa, err_str)) {
- ret = -EINVAL;
- goto sa_clear;
- }
-
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not set SA CTL word (err: %d)", ret);
- goto sa_clear;
- }
-
- if (auth_key_len && auth_key) {
- /* Get a queue pair for HMAC init */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not get CPT QP");
- goto sa_clear;
- }
-
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- otx2_sec_idev_tx_cpt_qp_put(qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not put CPT QP");
- goto sa_clear;
- }
- }
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- snprintf(err_str, ERR_STR_SZ,
- "Replay window size is not supported");
- ret = -ENOTSUP;
- goto sa_clear;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not allocate memory");
- ret = -ENOMEM;
- goto sa_clear;
- }
-
- rte_spinlock_init(&sa->replay->lock);
- /*
- * Set window bottom to 1, base and top to size of
- * window
- */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- return 0;
-
-sa_clear:
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
-tbl_unlock:
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
-
- otx2_err("%s", err_str);
-
- return ret;
-}
-
-static int
-eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- ret = ipsec_fp_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return eth_sec_ipsec_in_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
- else
- return eth_sec_ipsec_out_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_eth_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- /*
- * Save userdata provided by the application. For ingress packets, this
- * could be used to identify the SA.
- */
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static void
-otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
-{
- if (sa != NULL) {
- if (sa->replay_win_sz && sa->replay)
- rte_free(sa->replay);
- }
-}
-
-static int
-otx2_eth_sec_session_destroy(void *device,
- struct rte_security_session *sess)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
- struct otx2_sec_session_ipsec_ip *sess_ip;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
- int ret;
-
- priv = get_sec_session_private_data(sess);
- if (priv == NULL)
- return -EINVAL;
-
- sess_ip = &priv->ipsec.ip;
-
- if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
- sa = sess_ip->in_sa;
-
- /* Release the anti replay window */
- otx2_eth_sec_free_anti_replay(sa);
-
- /* Clear SA table entry */
- if (sa != NULL) {
- sa->ctl.valid = 0;
- rte_io_wmb();
- }
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- }
-
- /* Release CPT LF used for this session */
- if (sess_ip->qp != NULL) {
- ret = otx2_sec_idev_tx_cpt_qp_put(sess_ip->qp);
- if (ret)
- return ret;
- }
-
- sess_mp = rte_mempool_from_obj(priv);
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_eth_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static const struct rte_security_capability *
-otx2_eth_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_eth_sec_capabilities;
-}
-
-static struct rte_security_ops otx2_eth_sec_ops = {
- .session_create = otx2_eth_sec_session_create,
- .session_destroy = otx2_eth_sec_session_destroy,
- .session_get_size = otx2_eth_sec_session_get_size,
- .capabilities_get = otx2_eth_sec_capabilities_get
-};
-
-int
-otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev)
-{
- struct rte_security_ctx *ctx;
- int ret;
-
- ctx = rte_malloc("otx2_eth_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
- if (ctx == NULL)
- return -ENOMEM;
-
- ret = otx2_sec_idev_cfg_init(eth_dev->data->port_id);
- if (ret) {
- rte_free(ctx);
- return ret;
- }
-
- /* Populate ctx */
-
- ctx->device = eth_dev;
- ctx->ops = &otx2_eth_sec_ops;
- ctx->sess_cnt = 0;
- ctx->flags =
- (RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
-
- eth_dev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev)
-{
- rte_free(eth_dev->security_ctx);
-}
-
-static int
-eth_sec_ipsec_cfg(struct rte_eth_dev *eth_dev, uint8_t tt)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- struct nix_inline_ipsec_lf_cfg *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct eth_sec_tag_const tag_const;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
- req->enable = 1;
- req->sa_base_addr = mz->iova;
-
- req->ipsec_cfg0.tt = tt;
-
- tag_const.u32 = 0;
- tag_const.event_type = RTE_EVENT_TYPE_ETHDEV;
- tag_const.port = port;
- req->ipsec_cfg0.tag_const = tag_const.u32;
-
- req->ipsec_cfg0.sa_pow2_size =
- rte_log2_u32(sizeof(struct otx2_ipsec_fp_in_sa));
- req->ipsec_cfg0.lenm1_max = NIX_MAX_FRS - 1;
-
- req->ipsec_cfg1.sa_idx_w = rte_log2_u32(dev->ipsec_in_max_spi);
- req->ipsec_cfg1.sa_idx_max = dev->ipsec_in_max_spi - 1;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- int ret;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = 0; /* Read RQ:0 context */
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret < 0) {
- otx2_err("Could not read RQ context");
- return ret;
- }
-
- /* Update tag type */
- ret = eth_sec_ipsec_cfg(eth_dev, rsp->rq.sso_tt);
- if (ret < 0)
- otx2_err("Could not update sec eth tag type");
-
- return ret;
-}
-
-int
-otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
-{
- const size_t sa_width = sizeof(struct otx2_ipsec_fp_in_sa);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int mz_sz, ret;
- uint16_t nb_sa;
-
- RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
- !RTE_IS_POWER_OF_2(sa_width));
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return 0;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- nb_sa = dev->ipsec_in_max_spi;
- mz_sz = nb_sa * sa_width;
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_reserve_aligned(name, mz_sz, rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-
- if (mz == NULL) {
- otx2_err("Could not allocate inbound SA DB");
- return -ENOMEM;
- }
-
- memset(mz->addr, 0, mz_sz);
-
- ret = eth_sec_ipsec_cfg(eth_dev, SSO_TT_ORDERED);
- if (ret < 0) {
- otx2_err("Could not configure inline IPsec");
- goto sec_fini;
- }
-
- rte_spinlock_init(&dev->ipsec_tbl_lock);
-
- return 0;
-
-sec_fini:
- otx2_err("Could not configure device for security");
- otx2_eth_sec_fini(eth_dev);
- return ret;
-}
-
-void
-otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return;
-
- lookup_mem_sa_tbl_clear(eth_dev);
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- rte_memzone_free(rte_memzone_lookup(name));
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.h b/drivers/net/octeontx2/otx2_ethdev_sec.h
deleted file mode 100644
index 298b00bf89..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.h
+++ /dev/null
@@ -1,130 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_H__
-#define __OTX2_ETHDEV_SEC_H__
-
-#include <rte_ethdev.h>
-
-#include "otx2_ipsec_fp.h"
-#include "otx2_ipsec_po.h"
-
-#define OTX2_CPT_RES_ALIGN 16
-#define OTX2_NIX_SEND_DESC_ALIGN 16
-#define OTX2_CPT_INST_SIZE 64
-
-#define OTX2_CPT_EGRP_INLINE_IPSEC 1
-
-#define OTX2_CPT_OP_INLINE_IPSEC_OUTB (0x40 | 0x25)
-#define OTX2_CPT_OP_INLINE_IPSEC_INB (0x40 | 0x26)
-#define OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD (0x40 | 0x27)
-
-#define OTX2_SEC_CPT_COMP_GOOD 0x1
-#define OTX2_SEC_UC_COMP_GOOD 0x0
-#define OTX2_SEC_COMP_GOOD (OTX2_SEC_UC_COMP_GOOD << 8 | \
- OTX2_SEC_CPT_COMP_GOOD)
-
-/* CPT Result */
-struct otx2_cpt_res {
- union {
- struct {
- uint64_t compcode:8;
- uint64_t uc_compcode:8;
- uint64_t doneint:1;
- uint64_t reserved_17_63:47;
- uint64_t reserved_64_127;
- };
- uint16_t u16[8];
- };
-};
-
-struct otx2_cpt_inst_s {
- union {
- struct {
- /* W0 */
- uint64_t nixtxl : 3;
- uint64_t doneint : 1;
- uint64_t nixtx_addr : 60;
- /* W1 */
- uint64_t res_addr : 64;
- /* W2 */
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_175_172 : 4;
- uint64_t rvu_pf_func : 16;
- /* W3 */
- uint64_t qord : 1;
- uint64_t rsvd_194_193 : 2;
- uint64_t wqe_ptr : 61;
- /* W4 */
- uint64_t dlen : 16;
- uint64_t param2 : 16;
- uint64_t param1 : 16;
- uint64_t opcode : 16;
- /* W5 */
- uint64_t dptr : 64;
- /* W6 */
- uint64_t rptr : 64;
- /* W7 */
- uint64_t cptr : 61;
- uint64_t egrp : 3;
- };
- uint64_t u64[8];
- };
-};
-
-/*
- * Security session for inline IPsec protocol offload. This is private data of
- * inline capable PMD.
- */
-struct otx2_sec_session_ipsec_ip {
- RTE_STD_C11
- union {
- /*
- * Inbound SA would accessed by crypto block. And so the memory
- * is allocated differently and shared with the h/w. Only
- * holding a pointer to this memory in the session private
- * space.
- */
- void *in_sa;
- /* Outbound SA */
- struct otx2_ipsec_fp_out_sa out_sa;
- };
-
- /* Address of CPT LMTLINE */
- void *cpt_lmtline;
- /* CPT LF enqueue register address */
- rte_iova_t cpt_nq_reg;
-
- /* Pre calculated lengths and data for a session */
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq;
- uint32_t esn_hi;
- };
- };
-
- uint64_t inst_w7;
-
- /* CPT QP used by SA */
- struct otx2_cpt_qp *qp;
-};
-
-int otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_init(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_fini(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_SEC_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
deleted file mode 100644
index 021782009f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ /dev/null
@@ -1,182 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_TX_H__
-#define __OTX2_ETHDEV_SEC_TX_H__
-
-#include <rte_security.h>
-#include <rte_mbuf.h>
-
-#include "otx2_ethdev_sec.h"
-#include "otx2_security.h"
-
-struct otx2_ipsec_fp_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-static __rte_always_inline int32_t
-otx2_ipsec_fp_out_rlen_get(struct otx2_sec_session_ipsec_ip *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t base);
-
-static __rte_always_inline int
-otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
- const struct otx2_eth_txq *txq, const uint32_t offload_flags)
-{
- uint32_t dlen, rlen, desc_headroom, extend_head, extend_tail;
- struct otx2_sec_session_ipsec_ip *sess;
- struct otx2_ipsec_fp_out_hdr *hdr;
- struct otx2_ipsec_fp_out_sa *sa;
- uint64_t data_addr, desc_addr;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- uint64_t lmt_status;
- char *data;
-
- struct desc {
- struct otx2_cpt_res cpt_res __rte_aligned(OTX2_CPT_RES_ALIGN);
- struct nix_send_hdr_s nix_hdr
- __rte_aligned(OTX2_NIX_SEND_DESC_ALIGN);
- union nix_send_sg_s nix_sg;
- struct nix_iova_s nix_iova;
- } *sd;
-
- priv = (struct otx2_sec_session *)(*rte_security_dynfield(m));
- sess = &priv->ipsec.ip;
- sa = &sess->out_sa;
-
- RTE_ASSERT(sess->cpt_lmtline != NULL);
- RTE_ASSERT(!(offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F));
-
- dlen = rte_pktmbuf_pkt_len(m) + sizeof(*hdr) - RTE_ETHER_HDR_LEN;
- rlen = otx2_ipsec_fp_out_rlen_get(sess, dlen - sizeof(*hdr));
-
- RTE_BUILD_BUG_ON(OTX2_CPT_RES_ALIGN % OTX2_NIX_SEND_DESC_ALIGN);
- RTE_BUILD_BUG_ON(sizeof(sd->cpt_res) % OTX2_NIX_SEND_DESC_ALIGN);
-
- extend_head = sizeof(*hdr);
- extend_tail = rlen - dlen;
-
- desc_headroom = (OTX2_CPT_RES_ALIGN - 1) + sizeof(*sd);
-
- if (unlikely(!rte_pktmbuf_is_contiguous(m)) ||
- unlikely(rte_pktmbuf_headroom(m) < extend_head + desc_headroom) ||
- unlikely(rte_pktmbuf_tailroom(m) < extend_tail)) {
- goto drop;
- }
-
- /*
- * Extend mbuf data to point to the expected packet buffer for NIX.
- * This includes the Ethernet header followed by the encrypted IPsec
- * payload
- */
- rte_pktmbuf_append(m, extend_tail);
- data = rte_pktmbuf_prepend(m, extend_head);
- data_addr = rte_pktmbuf_iova(m);
-
- /*
- * Move the Ethernet header, to insert otx2_ipsec_fp_out_hdr prior
- * to the IP header
- */
- memcpy(data, data + sizeof(*hdr), RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_fp_out_hdr *)(data + RTE_ETHER_HDR_LEN);
-
- if (sa->ctl.enc_type == OTX2_IPSEC_FP_SA_ENC_AES_GCM) {
- /* AES-128-GCM */
- memcpy(hdr->iv, &sa->nonce, 4);
- memset(hdr->iv + 4, 0, 12); //TODO: make it random
- } else {
- /* AES-128-[CBC] + [SHA1] */
- memset(hdr->iv, 0, 16); //TODO: make it random
- }
-
- /* Keep CPT result and NIX send descriptors in headroom */
- sd = (void *)RTE_PTR_ALIGN(data - desc_headroom, OTX2_CPT_RES_ALIGN);
- desc_addr = data_addr - RTE_PTR_DIFF(data, sd);
-
- /* Prepare CPT instruction */
-
- inst.nixtx_addr = (desc_addr + offsetof(struct desc, nix_hdr)) >> 4;
- inst.doneint = 0;
- inst.nixtxl = 1;
- inst.res_addr = desc_addr + offsetof(struct desc, cpt_res);
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.wqe_ptr = desc_addr >> 3; /* FIXME: Handle errors */
- inst.qord = 1;
- inst.opcode = OTX2_CPT_OP_INLINE_IPSEC_OUTB;
- inst.dlen = dlen;
- inst.dptr = data_addr + RTE_ETHER_HDR_LEN;
- inst.u64[7] = sess->inst_w7;
-
- /* First word contains 8 bit completion code & 8 bit uc comp code */
- sd->cpt_res.u16[0] = 0;
-
- /* Prepare NIX send descriptors for output expected from CPT */
-
- sd->nix_hdr.w0.u = 0;
- sd->nix_hdr.w1.u = 0;
- sd->nix_hdr.w0.sq = txq->sq;
- sd->nix_hdr.w0.sizem1 = 1;
- sd->nix_hdr.w0.total = rte_pktmbuf_data_len(m);
- sd->nix_hdr.w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sd->nix_hdr.w0.df = otx2_nix_prefree_seg(m);
-
- sd->nix_sg.u = 0;
- sd->nix_sg.subdc = NIX_SUBDC_SG;
- sd->nix_sg.ld_type = NIX_SENDLDTYPE_LDD;
- sd->nix_sg.segs = 1;
- sd->nix_sg.seg1_size = rte_pktmbuf_data_len(m);
-
- sd->nix_iova.addr = rte_mbuf_data_iova(m);
-
- /* Mark mempool object as "put" since it is freed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
-
- inst.param1 = sess->esn_hi >> 16;
- inst.param2 = sess->esn_hi & 0xffff;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(sess->cpt_lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(sess->cpt_nq_reg);
- } while (lmt_status == 0);
-
- return 1;
-
-drop:
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* Don't free if reference count > 1 */
- if (rte_pktmbuf_prefree_seg(m) == NULL)
- return 0;
- }
- rte_pktmbuf_free(m);
- return 0;
-}
-
-#endif /* __OTX2_ETHDEV_SEC_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
deleted file mode 100644
index 1d0fe4e950..0000000000
--- a/drivers/net/octeontx2/otx2_flow.c
+++ /dev/null
@@ -1,1189 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-enum flow_vtag_cfg_dir { VTAG_TX, VTAG_RX };
-
-int
-otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct otx2_mcam_ents_info *info;
- struct rte_bitmap *bmap;
- struct rte_flow *flow;
- int entry_count = 0;
- int rc, idx;
-
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- info = &npc->flow_entry_info[idx];
- entry_count += info->live_ent;
- }
-
- if (entry_count == 0)
- return 0;
-
- /* Free all MCAM entries allocated */
- rc = otx2_flow_mcam_free_all_entries(mbox);
-
- /* Free any MCAM counters and delete flow list */
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
- if (flow->ctr_id != NPC_COUNTER_NONE)
- rc |= otx2_flow_mcam_free_counter(mbox,
- flow->ctr_id);
-
- TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
- rte_free(flow);
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
- }
- info = &npc->flow_entry_info[idx];
- info->free_ent = 0;
- info->live_ent = 0;
- }
- return rc;
-}
-
-
-static int
-flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
- struct otx2_npc_flow_info *flow_info)
-{
- /* This is non-LDATA part in search key */
- uint64_t key_data[2] = {0ULL, 0ULL};
- uint64_t key_mask[2] = {0ULL, 0ULL};
- int intf = pst->flow->nix_intf;
- int key_len, bit = 0, index;
- int off, idx, data_off = 0;
- uint8_t lid, mask, data;
- uint16_t layer_info;
- uint64_t lt, flags;
-
-
- /* Skip till Layer A data start */
- while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
- if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
- data_off++;
- bit++;
- }
-
- /* Each bit represents 1 nibble */
- data_off *= 4;
-
- index = 0;
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- /* Offset in key */
- off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
- lt = pst->lt[lid] & 0xf;
- flags = pst->flags[lid] & 0xff;
-
- /* NPC_LAYER_KEX_S */
- layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
-
- if (layer_info) {
- for (idx = 0; idx <= 2 ; idx++) {
- if (layer_info & (1 << idx)) {
- if (idx == 2)
- data = lt;
- else if (idx == 1)
- data = ((flags >> 4) & 0xf);
- else
- data = (flags & 0xf);
-
- if (data_off >= 64) {
- data_off = 0;
- index++;
- }
- key_data[index] |= ((uint64_t)data <<
- data_off);
- mask = 0xf;
- if (lt == 0)
- mask = 0;
- key_mask[index] |= ((uint64_t)mask <<
- data_off);
- data_off += 4;
- }
- }
- }
- }
-
- otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
- key_data[0], key_data[1]);
-
- /* Copy this into mcam string */
- key_len = (pst->npc->keyx_len[intf] + 7) / 8;
- otx2_npc_dbg("Key_len = %d", key_len);
- memcpy(pst->flow->mcam_data, key_data, key_len);
- memcpy(pst->flow->mcam_mask, key_mask, key_len);
-
- otx2_npc_dbg("Final flow data");
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
- idx, pst->flow->mcam_data[idx],
- idx, pst->flow->mcam_mask[idx]);
- }
-
- /*
- * Now we have mcam data and mask formatted as
- * [Key_len/4 nibbles][0 or 1 nibble hole][data]
- * hole is present if key_len is odd number of nibbles.
- * mcam data must be split into 64 bits + 48 bits segments
- * for each back W0, W1.
- */
-
- return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
-}
-
-static int
-flow_parse_attr(struct rte_eth_dev *eth_dev,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const char *errmsg = NULL;
-
- if (attr == NULL)
- errmsg = "Attribute can't be empty";
- else if (attr->group)
- errmsg = "Groups are not supported";
- else if (attr->priority >= dev->npc_flow.flow_max_priority)
- errmsg = "Priority should be with in specified range";
- else if ((!attr->egress && !attr->ingress) ||
- (attr->egress && attr->ingress))
- errmsg = "Exactly one of ingress or egress must be set";
-
- if (errmsg != NULL) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
- attr, errmsg);
- return -ENOTSUP;
- }
-
- if (attr->ingress)
- flow->nix_intf = OTX2_INTF_RX;
- else
- flow->nix_intf = OTX2_INTF_TX;
-
- flow->priority = attr->priority;
- return 0;
-}
-
-static inline int
-flow_get_free_rss_grp(struct rte_bitmap *bmap,
- uint32_t size, uint32_t *pos)
-{
- for (*pos = 0; *pos < size; ++*pos) {
- if (!rte_bitmap_get(bmap, *pos))
- break;
- }
-
- return *pos < size ? 0 : -1;
-}
-
-static int
-flow_configure_rss_action(struct otx2_eth_dev *dev,
- const struct rte_flow_action_rss *rss,
- uint8_t *alg_idx, uint32_t *rss_grp,
- int mcam_index)
-{
- struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
- uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
- uint32_t flowkey_cfg, grp_aval, i;
- uint16_t *ind_tbl = NULL;
- uint8_t flowkey_algx;
- int rc;
-
- rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
- flow_info->rss_grps, &grp_aval);
- /* RSS group :0 is not usable for flow rss action */
- if (rc < 0 || grp_aval == 0)
- return -ENOSPC;
-
- *rss_grp = grp_aval;
-
- otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
- rss->key_len);
-
- /* If queue count passed in the rss action is less than
- * HW configured reta size, replicate rss action reta
- * across HW reta table.
- */
- if (dev->rss_info.rss_size > rss->queue_num) {
- ind_tbl = reta;
-
- for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
- memcpy(reta + i * rss->queue_num, rss->queue,
- sizeof(uint16_t) * rss->queue_num);
-
- i = dev->rss_info.rss_size % rss->queue_num;
- if (i)
- memcpy(&reta[dev->rss_info.rss_size] - i,
- rss->queue, i * sizeof(uint16_t));
- } else {
- ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
- }
-
- rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
- if (rc) {
- otx2_err("Failed to init rss table rc = %d", rc);
- return rc;
- }
-
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
- *rss_grp, mcam_index);
- if (rc) {
- otx2_err("Failed to set rss hash function rc = %d", rc);
- return rc;
- }
-
- *alg_idx = flowkey_algx;
-
- rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
-
- return 0;
-}
-
-
-static int
-flow_program_rss_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const struct rte_flow_action_rss *rss;
- uint32_t rss_grp;
- uint8_t alg_idx;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
- rss = (const struct rte_flow_action_rss *)actions->conf;
-
- rc = flow_configure_rss_action(dev,
- rss, &alg_idx, &rss_grp,
- flow->mcam_id);
- if (rc)
- return rc;
-
- flow->npc_action &= (~(0xfULL));
- flow->npc_action |= NIX_RX_ACTIONOP_RSS;
- flow->npc_action |=
- ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
- NIX_RSS_ACT_ALG_OFFSET) |
- ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
- NIX_RSS_ACT_GRP_OFFSET);
- }
- }
- return 0;
-}
-
-static int
-flow_free_rss_action(struct rte_eth_dev *eth_dev,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- uint32_t rss_grp;
-
- if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
- rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
- NIX_RSS_ACT_GRP_MASK;
- if (rss_grp == 0 || rss_grp >= npc->rss_grps)
- return -EINVAL;
-
- rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
- }
-
- return 0;
-}
-
-static int
-flow_update_sec_tt(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[])
-{
- int rc = 0;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- rc = otx2_eth_sec_update_tag_type(eth_dev);
- break;
- }
- }
-
- return rc;
-}
-
-static int
-flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
-{
- otx2_npc_dbg("Meta Item");
- return 0;
-}
-
-/*
- * Parse function of each layer:
- * - Consume one or more patterns that are relevant.
- * - Update parse_state
- * - Set parse_state.pattern = last item consumed
- * - Set appropriate error code/message when returning error.
- */
-typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
-
-static int
-flow_parse_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- flow_parse_stage_func_t parse_stage_funcs[] = {
- flow_parse_meta_items,
- otx2_flow_parse_higig2_hdr,
- otx2_flow_parse_la,
- otx2_flow_parse_lb,
- otx2_flow_parse_lc,
- otx2_flow_parse_ld,
- otx2_flow_parse_le,
- otx2_flow_parse_lf,
- otx2_flow_parse_lg,
- otx2_flow_parse_lh,
- };
- struct otx2_eth_dev *hw = dev->data->dev_private;
- uint8_t layer = 0;
- int key_offset;
- int rc;
-
- if (pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
- "pattern is NULL");
- return -EINVAL;
- }
-
- memset(pst, 0, sizeof(*pst));
- pst->npc = &hw->npc_flow;
- pst->error = error;
- pst->flow = flow;
-
- /* Use integral byte offset */
- key_offset = pst->npc->keyx_len[flow->nix_intf];
- key_offset = (key_offset + 7) / 8;
-
- /* Location where LDATA would begin */
- pst->mcam_data = (uint8_t *)flow->mcam_data;
- pst->mcam_mask = (uint8_t *)flow->mcam_mask;
-
- while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
- layer < RTE_DIM(parse_stage_funcs)) {
- otx2_npc_dbg("Pattern type = %d", pattern->type);
-
- /* Skip place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- pst->pattern = pattern;
- otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
- rc = parse_stage_funcs[layer](pst);
- if (rc != 0)
- return -rte_errno;
-
- layer++;
-
- /*
- * Parse stage function sets pst->pattern to
- * 1 past the last item it consumed.
- */
- pattern = pst->pattern;
-
- if (pst->terminate)
- break;
- }
-
- /* Skip trailing place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- /* Are there more items than what we can handle? */
- if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, pattern,
- "unsupported item in the sequence");
- return -ENOTSUP;
- }
-
- return 0;
-}
-
-static int
-flow_parse_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- int err;
-
- /* Check attributes */
- err = flow_parse_attr(dev, attr, error, flow);
- if (err)
- return err;
-
- /* Check actions */
- err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
- if (err)
- return err;
-
- /* Check pattern */
- err = flow_parse_pattern(dev, pattern, error, flow, pst);
- if (err)
- return err;
-
- /* Check for overlaps? */
- return 0;
-}
-
-static int
-otx2_flow_validate(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_parse_state parse_state;
- struct rte_flow flow;
-
- memset(&flow, 0, sizeof(flow));
- return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
- &parse_state);
-}
-
-static int
-flow_program_vtag_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- uint16_t vlan_id = 0, vlan_ethtype = RTE_ETHER_TYPE_VLAN;
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } tx_vtag_action;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- bool vlan_insert_action = false;
- uint64_t rx_vtag_action = 0;
- uint8_t vlan_pcp = 0;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_OF_POP_VLAN) {
- if (dev->npc_flow.vtag_actions == 1) {
- vtag_cfg =
- otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_RX;
- vtag_cfg->rx.strip_vtag = 1;
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- }
-
- rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- rx_vtag_action |= (NPC_LID_LB << 8);
- rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = rx_vtag_action;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
- const struct rte_flow_action_of_set_vlan_vid *vtag =
- (const struct rte_flow_action_of_set_vlan_vid *)
- actions->conf;
- vlan_id = rte_be_to_cpu_16(vtag->vlan_vid);
- if (vlan_id > 0xfff) {
- otx2_err("Invalid vlan_id for set vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type == RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN) {
- const struct rte_flow_action_of_push_vlan *ethtype =
- (const struct rte_flow_action_of_push_vlan *)
- actions->conf;
- vlan_ethtype = rte_be_to_cpu_16(ethtype->ethertype);
- if (vlan_ethtype != RTE_ETHER_TYPE_VLAN &&
- vlan_ethtype != RTE_ETHER_TYPE_QINQ) {
- otx2_err("Invalid ethtype specified for push"
- " vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
- const struct rte_flow_action_of_set_vlan_pcp *pcp =
- (const struct rte_flow_action_of_set_vlan_pcp *)
- actions->conf;
- vlan_pcp = pcp->vlan_pcp;
- if (vlan_pcp > 0x7) {
- otx2_err("Invalid PCP value for pcp action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- }
- }
-
- if (vlan_insert_action) {
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->tx.vtag0 =
- ((vlan_ethtype << 16) | (vlan_pcp << 13) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- tx_vtag_action.reg = 0;
- tx_vtag_action.act.vtag0_def = rsp->vtag0_idx;
- if (tx_vtag_action.act.vtag0_def < 0) {
- otx2_err("Failed to config TX VTAG action");
- return -EINVAL;
- }
- tx_vtag_action.act.vtag0_lid = NPC_LID_LA;
- tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- tx_vtag_action.act.vtag0_relptr =
- NIX_TX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = tx_vtag_action.reg;
- }
- return 0;
-}
-
-static struct rte_flow *
-otx2_flow_create(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_parse_state parse_state;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_flow *flow, *flow_iter;
- struct otx2_flow_list *list;
- int rc;
-
- flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Memory allocation failed");
- return NULL;
- }
- memset(flow, 0, sizeof(*flow));
-
- rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
- &parse_state);
- if (rc != 0)
- goto err_exit;
-
- rc = flow_program_vtag_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program vlan action");
- goto err_exit;
- }
-
- parse_state.is_vf = otx2_dev_is_vf(hw);
-
- rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to insert filter");
- goto err_exit;
- }
-
- rc = flow_program_rss_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program rss action");
- goto err_exit;
- }
-
- if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
- rc = flow_update_sec_tt(dev, actions);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to update tt with sec act");
- goto err_exit;
- }
- }
-
- list = &hw->npc_flow.flow_list[flow->priority];
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > flow->mcam_id) {
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- return flow;
- }
- }
-
- TAILQ_INSERT_TAIL(list, flow, next);
- return flow;
-
-err_exit:
- rte_free(flow);
- return NULL;
-}
-
-static int
-otx2_flow_destroy(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_bitmap *bmap;
- uint16_t match_id;
- int rc;
-
- match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
- NIX_RX_ACT_MATCH_MASK;
-
- if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- if (rte_atomic32_read(&npc->mark_actions) == 0)
- return -EINVAL;
-
- /* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
- hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
- }
-
- if (flow->nix_intf == OTX2_INTF_RX && flow->vtag_action) {
- npc->vtag_actions--;
- if (npc->vtag_actions == 0) {
- if (hw->vlan_info.strip_on == 0) {
- hw->rx_offload_flags &=
- ~NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
- }
- }
-
- rc = flow_free_rss_action(dev, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to free rss action");
- }
-
- rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to destroy filter");
- }
-
- TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
-
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
-
- rte_free(flow);
- return 0;
-}
-
-static int
-otx2_flow_flush(struct rte_eth_dev *dev,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries "
- ", counters");
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to flush filter");
- return -rte_errno;
- }
-
- return 0;
-}
-
-static int
-otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
- int enable __rte_unused,
- struct rte_flow_error *error)
-{
- /*
- * If we support, we need to un-install the default mcam
- * entry for this port.
- */
-
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Flow isolation not supported");
-
- return -rte_errno;
-}
-
-static int
-otx2_flow_query(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- const struct rte_flow_action *action,
- void *data,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct rte_flow_query_count *query = data;
- struct otx2_mbox *mbox = hw->mbox;
- const char *errmsg = NULL;
- int errcode = ENOTSUP;
- int rc;
-
- if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
- errmsg = "Only COUNT is supported in query";
- goto err_exit;
- }
-
- if (flow->ctr_id == NPC_COUNTER_NONE) {
- errmsg = "Counter is not available";
- goto err_exit;
- }
-
- rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error reading flow counter";
- goto err_exit;
- }
- query->hits_set = 1;
- query->bytes_set = 0;
-
- if (query->reset)
- rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error clearing flow counter";
- goto err_exit;
- }
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- errmsg);
- return -rte_errno;
-}
-
-static int
-otx2_flow_dev_dump(struct rte_eth_dev *dev,
- struct rte_flow *flow, FILE *file,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- uint32_t max_prio, i;
-
- if (file == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Invalid file");
- return -EINVAL;
- }
- if (flow != NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL,
- "Invalid argument");
- return -EINVAL;
- }
-
- max_prio = hw->npc_flow.flow_max_priority;
-
- for (i = 0; i < max_prio; i++) {
- list = &hw->npc_flow.flow_list[i];
-
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- otx2_flow_dump(file, hw, flow_iter);
- }
- }
-
- return 0;
-}
-
-const struct rte_flow_ops otx2_flow_ops = {
- .validate = otx2_flow_validate,
- .create = otx2_flow_create,
- .destroy = otx2_flow_destroy,
- .flush = otx2_flow_flush,
- .query = otx2_flow_query,
- .isolate = otx2_flow_isolate,
- .dev_dump = otx2_flow_dev_dump,
-};
-
-static int
-flow_supp_key_len(uint32_t supp_mask)
-{
- int nib_count = 0;
- while (supp_mask) {
- nib_count++;
- supp_mask &= (supp_mask - 1);
- }
- return nib_count * 4;
-}
-
-/* Refer HRM register:
- * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
- * and
- * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
- **/
-#define BYTESM1_SHIFT 16
-#define HDR_OFF_SHIFT 8
-static void
-flow_update_kex_info(struct npc_xtract_info *xtract_info,
- uint64_t val)
-{
- xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
- xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
- xtract_info->key_off = val & 0x3f;
- xtract_info->enable = ((val >> 7) & 0x1);
- xtract_info->flags_enable = ((val >> 6) & 0x1);
-}
-
-static void
-flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
- struct npc_get_kex_cfg_rsp *kex_rsp)
-{
- volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
- [NPC_MAX_LD];
- struct npc_xtract_info *x_info = NULL;
- int lid, lt, ld, fl, ix;
- otx2_dxcfg_t *p;
- uint64_t keyw;
- uint64_t val;
-
- npc->keyx_supp_nmask[NPC_MCAM_RX] =
- kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_supp_nmask[NPC_MCAM_TX] =
- kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_len[NPC_MCAM_RX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
- npc->keyx_len[NPC_MCAM_TX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
-
- keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_RX] = keyw;
- keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_TX] = keyw;
-
- /* Update KEX_LD_FLAG */
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- for (fl = 0; fl < NPC_MAX_LFL; fl++) {
- x_info =
- &npc->prx_fxcfg[ix][ld][fl].xtract[0];
- val = kex_rsp->intf_ld_flags[ix][ld][fl];
- flow_update_kex_info(x_info, val);
- }
- }
- }
-
- /* Update LID, LT and LDATA cfg */
- p = &npc->prx_dxcfg;
- q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
- (&kex_rsp->intf_lid_lt_ld);
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- for (lt = 0; lt < NPC_MAX_LT; lt++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- x_info = &(*p)[ix][lid][lt].xtract[ld];
- val = (*q)[ix][lid][lt][ld];
- flow_update_kex_info(x_info, val);
- }
- }
- }
- }
- /* Update LDATA Flags cfg */
- npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
- npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
-}
-
-static struct otx2_idev_kex_cfg *
-flow_intra_dev_kex_cfg(void)
-{
- static const char name[] = "octeontx2_intra_device_kex_conf";
- struct otx2_idev_kex_cfg *idev;
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(name);
- if (mz)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz) {
- idev = mz->addr;
- rte_atomic16_set(&idev->kex_refcnt, 0);
- return idev;
- }
- return NULL;
-}
-
-static int
-flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
-{
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_get_kex_cfg_rsp *kex_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- char mkex_pfl_name[MKEX_NAME_LEN];
- struct otx2_idev_kex_cfg *idev;
- int rc = 0;
-
- idev = flow_intra_dev_kex_cfg();
- if (!idev)
- return -ENOMEM;
-
- /* Is kex_cfg read by any another driver? */
- if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
- /* Call mailbox to get key & data size */
- (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config");
- goto done;
- }
- memcpy(&idev->kex_cfg, kex_rsp,
- sizeof(struct npc_get_kex_cfg_rsp));
- }
-
- otx2_mbox_memcpy(mkex_pfl_name,
- idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
-
- strlcpy((char *)dev->mkex_pfl_name,
- mkex_pfl_name, sizeof(dev->mkex_pfl_name));
-
- flow_process_mkex_cfg(npc, &idev->kex_cfg);
-
-done:
- return rc;
-}
-
-#define OTX2_MCAM_TOT_ENTRIES_96XX (4096)
-#define OTX2_MCAM_TOT_ENTRIES_98XX (16384)
-
-static int otx2_mcam_tot_entries(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_98xx(dev))
- return OTX2_MCAM_TOT_ENTRIES_98XX;
- else
- return OTX2_MCAM_TOT_ENTRIES_96XX;
-}
-
-int
-otx2_flow_init(struct otx2_eth_dev *hw)
-{
- uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- uint32_t bmap_sz, tot_mcam_entries = 0;
- int rc = 0, idx;
-
- rc = flow_fetch_kex_cfg(hw);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config from idev");
- return rc;
- }
-
- rte_atomic32_init(&npc->mark_actions);
- npc->vtag_actions = 0;
-
- tot_mcam_entries = otx2_mcam_tot_entries(hw);
- npc->mcam_entries = tot_mcam_entries >> npc->keyw[NPC_MCAM_RX];
- /* Free, free_rev, live and live_rev entries */
- bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
- mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
- RTE_CACHE_LINE_SIZE);
- if (mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- return rc;
- }
-
- npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_mcam_ents_info),
- 0);
- if (npc->flow_entry_info == NULL) {
- otx2_err("flow_entry_info alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries == NULL) {
- otx2_err("free_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries_rev == NULL) {
- otx2_err("free_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries == NULL) {
- otx2_err("live_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries_rev == NULL) {
- otx2_err("live_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_flow_list),
- 0);
- if (npc->flow_list == NULL) {
- otx2_err("flow_list alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc_mem = mem;
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- TAILQ_INIT(&npc->flow_list[idx]);
-
- npc->free_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->free_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->flow_entry_info[idx].free_ent = 0;
- npc->flow_entry_info[idx].live_ent = 0;
- npc->flow_entry_info[idx].max_id = 0;
- npc->flow_entry_info[idx].min_id = ~(0);
- }
-
- npc->rss_grps = NIX_RSS_GRPS;
-
- bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
- nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
- if (nix_mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
-
- /* Group 0 will be used for RSS,
- * 1 -7 will be used for rte_flow RSS action
- */
- rte_bitmap_set(npc->rss_grp_entries, 0);
-
- return 0;
-
-err:
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
- if (npc_mem)
- rte_free(npc_mem);
- return rc;
-}
-
-int
-otx2_flow_fini(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries, counters");
- return rc;
- }
-
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
deleted file mode 100644
index 790e6ef1e8..0000000000
--- a/drivers/net/octeontx2/otx2_flow.h
+++ /dev/null
@@ -1,414 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_FLOW_H__
-#define __OTX2_FLOW_H__
-
-#include <stdint.h>
-
-#include <rte_flow_driver.h>
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-#include "otx2_mbox.h"
-
-struct otx2_eth_dev;
-
-int otx2_flow_init(struct otx2_eth_dev *hw);
-int otx2_flow_fini(struct otx2_eth_dev *hw);
-extern const struct rte_flow_ops otx2_flow_ops;
-
-enum {
- OTX2_INTF_RX = 0,
- OTX2_INTF_TX = 1,
- OTX2_INTF_MAX = 2,
-};
-
-#define NPC_IH_LENGTH 8
-#define NPC_TPID_LENGTH 2
-#define NPC_HIGIG2_LENGTH 16
-#define NPC_MAX_RAW_ITEM_LEN 16
-#define NPC_COUNTER_NONE (-1)
-/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
-#define NPC_MAX_EXTRACT_DATA_LEN (64)
-#define NPC_LDATA_LFLAG_LEN (16)
-#define NPC_MAX_KEY_NIBBLES (31)
-/* Nibble offsets */
-#define NPC_LAYER_KEYX_SZ (3)
-#define NPC_PARSE_KEX_S_LA_OFFSET (7)
-#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
- ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
- + NPC_PARSE_KEX_S_LA_OFFSET)
-
-
-/* supported flow actions flags */
-#define OTX2_FLOW_ACT_MARK (1 << 0)
-#define OTX2_FLOW_ACT_FLAG (1 << 1)
-#define OTX2_FLOW_ACT_DROP (1 << 2)
-#define OTX2_FLOW_ACT_QUEUE (1 << 3)
-#define OTX2_FLOW_ACT_RSS (1 << 4)
-#define OTX2_FLOW_ACT_DUP (1 << 5)
-#define OTX2_FLOW_ACT_SEC (1 << 6)
-#define OTX2_FLOW_ACT_COUNT (1 << 7)
-#define OTX2_FLOW_ACT_PF (1 << 8)
-#define OTX2_FLOW_ACT_VF (1 << 9)
-#define OTX2_FLOW_ACT_VLAN_STRIP (1 << 10)
-#define OTX2_FLOW_ACT_VLAN_INSERT (1 << 11)
-#define OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT (1 << 12)
-#define OTX2_FLOW_ACT_VLAN_PCP_INSERT (1 << 13)
-
-/* terminating actions */
-#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
- OTX2_FLOW_ACT_QUEUE | \
- OTX2_FLOW_ACT_RSS | \
- OTX2_FLOW_ACT_DUP | \
- OTX2_FLOW_ACT_SEC)
-
-/* This mark value indicates flag action */
-#define OTX2_FLOW_FLAG_VAL (0xffff)
-
-#define NIX_RX_ACT_MATCH_OFFSET (40)
-#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
-
-#define NIX_RSS_ACT_GRP_OFFSET (20)
-#define NIX_RSS_ACT_ALG_OFFSET (56)
-#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
-#define NIX_RSS_ACT_ALG_MASK (0x1F)
-
-/* PMD-specific definition of the opaque struct rte_flow */
-#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
-
-enum npc_mcam_intf {
- NPC_MCAM_RX,
- NPC_MCAM_TX
-};
-
-struct npc_xtract_info {
- /* Length in bytes of pkt data extracted. len = 0
- * indicates that extraction is disabled.
- */
- uint8_t len;
- uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
- uint8_t key_off; /* Byte offset in MCAM key where data is placed */
- uint8_t enable; /* Extraction enabled or disabled */
- uint8_t flags_enable; /* Flags extraction enabled */
-};
-
-/* Information for a given {LAYER, LTYPE} */
-struct npc_lid_lt_xtract_info {
- /* Info derived from parser configuration */
- uint16_t npc_proto; /* Network protocol identified */
- uint8_t valid_flags_mask; /* Flags applicable */
- uint8_t is_terminating:1; /* No more parsing */
- struct npc_xtract_info xtract[NPC_MAX_LD];
-};
-
-union npc_kex_ldata_flags_cfg {
- struct {
- #if defined(__BIG_ENDIAN_BITFIELD)
- uint64_t rvsd_62_1 : 61;
- uint64_t lid : 3;
- #else
- uint64_t lid : 3;
- uint64_t rvsd_62_1 : 61;
- #endif
- } s;
-
- uint64_t i;
-};
-
-typedef struct npc_lid_lt_xtract_info
- otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
-typedef struct npc_lid_lt_xtract_info
- otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
-
-
-/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
-struct npc_get_datax_cfg {
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
- /* Extract information indexed with [LID][LTYPE] */
- struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
- /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
- * Fields flags_ena_ld0, flags_ena_ld1 in
- * struct npc_lid_lt_xtract_info indicate if this is applicable
- * for a given {LAYER, LTYPE}
- */
- struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
-};
-
-struct otx2_mcam_ents_info {
- /* Current max & min values of mcam index */
- uint32_t max_id;
- uint32_t min_id;
- uint32_t free_ent;
- uint32_t live_ent;
-};
-
-struct otx2_flow_dump_data {
- uint8_t lid;
- uint16_t ltype;
-};
-
-struct rte_flow {
- uint8_t nix_intf;
- uint32_t mcam_id;
- int32_t ctr_id;
- uint32_t priority;
- /* Contiguous match string */
- uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t npc_action;
- uint64_t vtag_action;
- struct otx2_flow_dump_data dump_data[32];
- uint16_t num_patterns;
- TAILQ_ENTRY(rte_flow) next;
-};
-
-TAILQ_HEAD(otx2_flow_list, rte_flow);
-
-/* Accessed from ethdev private - otx2_eth_dev */
-struct otx2_npc_flow_info {
- rte_atomic32_t mark_actions;
- uint32_t vtag_actions;
- uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */
- uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
- uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
- uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
- uint32_t mcam_entries; /* mcam entries supported */
- otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
- otx2_fxcfg_t prx_fxcfg; /* Flag extract */
- otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
- /* mcam entry info per priority level: both free & in-use */
- struct otx2_mcam_ents_info *flow_entry_info;
- /* Bitmap of free preallocated entries in ascending index &
- * descending priority
- */
- struct rte_bitmap **free_entries;
- /* Bitmap of free preallocated entries in descending index &
- * ascending priority
- */
- struct rte_bitmap **free_entries_rev;
- /* Bitmap of live entries in ascending index & descending priority */
- struct rte_bitmap **live_entries;
- /* Bitmap of live entries in descending index & ascending priority */
- struct rte_bitmap **live_entries_rev;
- /* Priority bucket wise tail queue of all rte_flow resources */
- struct otx2_flow_list *flow_list;
- uint32_t rss_grps; /* rss groups supported */
- struct rte_bitmap *rss_grp_entries;
- uint16_t channel; /*rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
- uint16_t switch_header_type;
-};
-
-struct otx2_parse_state {
- struct otx2_npc_flow_info *npc;
- const struct rte_flow_item *pattern;
- const struct rte_flow_item *last_pattern; /* Temp usage */
- struct rte_flow_error *error;
- struct rte_flow *flow;
- uint8_t tunnel;
- uint8_t terminate;
- uint8_t layer_mask;
- uint8_t lt[NPC_MAX_LID];
- uint8_t flags[NPC_MAX_LID];
- uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
- uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
- bool is_vf;
-};
-
-struct otx2_flow_item_info {
- const void *def_mask; /* rte_flow default mask */
- void *hw_mask; /* hardware supported mask */
- int len; /* length of item */
- const void *spec; /* spec to use, NULL implies match any */
- const void *mask; /* mask to use */
- uint8_t hw_hdr_len; /* Extra data len at each layer*/
-};
-
-struct otx2_idev_kex_cfg {
- struct npc_get_kex_cfg_rsp kex_cfg;
- rte_atomic16_t kex_refcnt;
-};
-
-enum npc_kpu_parser_flag {
- NPC_F_NA = 0,
- NPC_F_PKI,
- NPC_F_PKI_VLAN,
- NPC_F_PKI_ETAG,
- NPC_F_PKI_ITAG,
- NPC_F_PKI_MPLS,
- NPC_F_PKI_NSH,
- NPC_F_ETYPE_UNK,
- NPC_F_ETHER_VLAN,
- NPC_F_ETHER_ETAG,
- NPC_F_ETHER_ITAG,
- NPC_F_ETHER_MPLS,
- NPC_F_ETHER_NSH,
- NPC_F_STAG_CTAG,
- NPC_F_STAG_CTAG_UNK,
- NPC_F_STAG_STAG_CTAG,
- NPC_F_STAG_STAG_STAG,
- NPC_F_QINQ_CTAG,
- NPC_F_QINQ_CTAG_UNK,
- NPC_F_QINQ_QINQ_CTAG,
- NPC_F_QINQ_QINQ_QINQ,
- NPC_F_BTAG_ITAG,
- NPC_F_BTAG_ITAG_STAG,
- NPC_F_BTAG_ITAG_CTAG,
- NPC_F_BTAG_ITAG_UNK,
- NPC_F_ETAG_CTAG,
- NPC_F_ETAG_BTAG_ITAG,
- NPC_F_ETAG_STAG,
- NPC_F_ETAG_QINQ,
- NPC_F_ETAG_ITAG,
- NPC_F_ETAG_ITAG_STAG,
- NPC_F_ETAG_ITAG_CTAG,
- NPC_F_ETAG_ITAG_UNK,
- NPC_F_ITAG_STAG_CTAG,
- NPC_F_ITAG_STAG,
- NPC_F_ITAG_CTAG,
- NPC_F_MPLS_4_LABELS,
- NPC_F_MPLS_3_LABELS,
- NPC_F_MPLS_2_LABELS,
- NPC_F_IP_HAS_OPTIONS,
- NPC_F_IP_IP_IN_IP,
- NPC_F_IP_6TO4,
- NPC_F_IP_MPLS_IN_IP,
- NPC_F_IP_UNK_PROTO,
- NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
- NPC_F_IP_6TO4_HAS_OPTIONS,
- NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
- NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
- NPC_F_IP6_HAS_EXT,
- NPC_F_IP6_TUN_IP6,
- NPC_F_IP6_MPLS_IN_IP,
- NPC_F_TCP_HAS_OPTIONS,
- NPC_F_TCP_HTTP,
- NPC_F_TCP_HTTPS,
- NPC_F_TCP_PPTP,
- NPC_F_TCP_UNK_PORT,
- NPC_F_TCP_HTTP_HAS_OPTIONS,
- NPC_F_TCP_HTTPS_HAS_OPTIONS,
- NPC_F_TCP_PPTP_HAS_OPTIONS,
- NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
- NPC_F_UDP_VXLAN,
- NPC_F_UDP_VXLAN_NOVNI,
- NPC_F_UDP_VXLAN_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE,
- NPC_F_UDP_VXLANGPE_NSH,
- NPC_F_UDP_VXLANGPE_MPLS,
- NPC_F_UDP_VXLANGPE_NOVNI,
- NPC_F_UDP_VXLANGPE_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
- NPC_F_UDP_VXLANGPE_UNK,
- NPC_F_UDP_VXLANGPE_NONP,
- NPC_F_UDP_GTP_GTPC,
- NPC_F_UDP_GTP_GTPU_G_PDU,
- NPC_F_UDP_GTP_GTPU_UNK,
- NPC_F_UDP_UNK_PORT,
- NPC_F_UDP_GENEVE,
- NPC_F_UDP_GENEVE_OAM,
- NPC_F_UDP_GENEVE_CRI_OPT,
- NPC_F_UDP_GENEVE_OAM_CRI_OPT,
- NPC_F_GRE_NVGRE,
- NPC_F_GRE_HAS_SRE,
- NPC_F_GRE_HAS_CSUM,
- NPC_F_GRE_HAS_KEY,
- NPC_F_GRE_HAS_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY,
- NPC_F_GRE_HAS_CSUM_SEQ,
- NPC_F_GRE_HAS_KEY_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY_SEQ,
- NPC_F_GRE_HAS_ROUTE,
- NPC_F_GRE_UNK_PROTO,
- NPC_F_GRE_VER1,
- NPC_F_GRE_VER1_HAS_SEQ,
- NPC_F_GRE_VER1_HAS_ACK,
- NPC_F_GRE_VER1_HAS_SEQ_ACK,
- NPC_F_GRE_VER1_UNK_PROTO,
- NPC_F_TU_ETHER_UNK,
- NPC_F_TU_ETHER_CTAG,
- NPC_F_TU_ETHER_CTAG_UNK,
- NPC_F_TU_ETHER_STAG_CTAG,
- NPC_F_TU_ETHER_STAG_CTAG_UNK,
- NPC_F_TU_ETHER_STAG,
- NPC_F_TU_ETHER_STAG_UNK,
- NPC_F_TU_ETHER_QINQ_CTAG,
- NPC_F_TU_ETHER_QINQ_CTAG_UNK,
- NPC_F_TU_ETHER_QINQ,
- NPC_F_TU_ETHER_QINQ_UNK,
- NPC_F_LAST /* has to be the last item */
-};
-
-
-int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
-
-int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count);
-
-int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
-
-int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
-
-int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
-
-int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt, uint8_t flags);
-
-int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error);
-
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
-int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
- struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info);
-
-void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt);
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
-
-int otx2_flow_parse_lh(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lg(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lf(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_le(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_ld(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lc(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lb(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_la(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow);
-
-int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
-
-int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-void otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow);
-#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
deleted file mode 100644
index 071740de86..0000000000
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_bp_cfg_req *req;
- struct nix_bp_cfg_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_sdp(dev))
- return 0;
-
- if (enb) {
- req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
- req->bpid_per_chan = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || req->chan_cnt != rsp->chan_cnt) {
- otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
- rsp->chan_cnt, req->chan_cnt, rc);
- return rc;
- }
-
- fc->bpid[0] = rsp->chan_bpid[0];
- } else {
- req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
-
- rc = otx2_mbox_process(mbox);
-
- memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
- }
-
- return rc;
-}
-
-int
-otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_pause_frm_cfg *req, *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_ETH_FC_NONE;
- return 0;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto done;
-
- if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_FULL;
- else if (rsp->rx_pause)
- fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
- else if (rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
- else
- fc_conf->mode = RTE_ETH_FC_NONE;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq)
- return -ENOMEM;
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- if (enb) {
- aq->cq.bpid = fc->bpid[0];
- aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
- aq->cq.bp = rxq->cq_drop;
- aq->cq_mask.bp = ~(aq->cq_mask.bp);
- }
-
- aq->cq.bp_ena = !!enb;
- aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-static int
-otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- return otx2_nix_cq_bp_cfg(eth_dev, enb);
-}
-
-int
-otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_pause_frm_cfg *req;
- uint8_t tx_pause, rx_pause;
- int rc = 0;
-
- if (otx2_dev_is_lbk(dev)) {
- otx2_info("No flow control support for LBK bound ethports");
- return -ENOTSUP;
- }
-
- if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
- fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
- otx2_info("Flowctrl parameter is not supported");
- return -EINVAL;
- }
-
- if (fc_conf->mode == fc->mode)
- return 0;
-
- rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
-
- /* Check if TX pause frame is already enabled or not */
- if (fc->tx_pause ^ tx_pause) {
- if (otx2_dev_is_Ax(dev) && eth_dev->data->dev_started) {
- /* on Ax, CQ should be in disabled state
- * while setting flow control configuration.
- */
- otx2_info("Stop the port=%d for setting flow control\n",
- eth_dev->data->port_id);
- return 0;
- }
- /* TX pause frames, enable/disable flowctrl on RX side. */
- rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
- if (rc)
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 1;
- req->rx_pause = rx_pause;
- req->tx_pause = tx_pause;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- fc->tx_pause = tx_pause;
- fc->rx_pause = rx_pause;
- fc->mode = fc_conf->mode;
-
- return rc;
-}
-
-int
-otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- fc_conf.mode = fc->mode;
-
- /* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
- if (otx2_dev_is_Ax(dev) &&
- (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
- fc_conf.mode =
- (fc_conf.mode == RTE_ETH_FC_FULL ||
- fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
- RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
- }
-
- return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
-}
-
-int
-otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
- int rc;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
- * by AF driver, update those info in PMD structure.
- */
- rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
- if (rc)
- goto exit;
-
- fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
-
-exit:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_dump.c b/drivers/net/octeontx2/otx2_flow_dump.c
deleted file mode 100644
index 3f86071300..0000000000
--- a/drivers/net/octeontx2/otx2_flow_dump.c
+++ /dev/null
@@ -1,595 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-#define NPC_MAX_FIELD_NAME_SIZE 80
-#define NPC_RX_ACTIONOP_MASK GENMASK(3, 0)
-#define NPC_RX_ACTION_PFFUNC_MASK GENMASK(19, 4)
-#define NPC_RX_ACTION_INDEX_MASK GENMASK(39, 20)
-#define NPC_RX_ACTION_MATCH_MASK GENMASK(55, 40)
-#define NPC_RX_ACTION_FLOWKEY_MASK GENMASK(60, 56)
-
-#define NPC_TX_ACTION_INDEX_MASK GENMASK(31, 12)
-#define NPC_TX_ACTION_MATCH_MASK GENMASK(47, 32)
-
-#define NIX_RX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_RX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_RX_VTAGACT_VTAG0_TYPE_MASK GENMASK(14, 12)
-#define NIX_RX_VTAGACT_VTAG0_VALID_MASK BIT_ULL(15)
-
-#define NIX_RX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_RX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_RX_VTAGACT_VTAG1_TYPE_MASK GENMASK(46, 44)
-#define NIX_RX_VTAGACT_VTAG1_VALID_MASK BIT_ULL(47)
-
-#define NIX_TX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_TX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_TX_VTAGACT_VTAG0_OP_MASK GENMASK(13, 12)
-#define NIX_TX_VTAGACT_VTAG0_DEF_MASK GENMASK(25, 16)
-
-#define NIX_TX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_TX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_TX_VTAGACT_VTAG1_OP_MASK GENMASK(45, 44)
-#define NIX_TX_VTAGACT_VTAG1_DEF_MASK GENMASK(57, 48)
-
-struct npc_rx_parse_nibble_s {
- uint16_t chan : 3;
- uint16_t errlev : 1;
- uint16_t errcode : 2;
- uint16_t l2l3bm : 1;
- uint16_t laflags : 2;
- uint16_t latype : 1;
- uint16_t lbflags : 2;
- uint16_t lbtype : 1;
- uint16_t lcflags : 2;
- uint16_t lctype : 1;
- uint16_t ldflags : 2;
- uint16_t ldtype : 1;
- uint16_t leflags : 2;
- uint16_t letype : 1;
- uint16_t lfflags : 2;
- uint16_t lftype : 1;
- uint16_t lgflags : 2;
- uint16_t lgtype : 1;
- uint16_t lhflags : 2;
- uint16_t lhtype : 1;
-} __rte_packed;
-
-const char *intf_str[] = {
- "NIX-RX",
- "NIX-TX",
-};
-
-const char *ltype_str[NPC_MAX_LID][NPC_MAX_LT] = {
- [NPC_LID_LA][0] = "NONE",
- [NPC_LID_LA][NPC_LT_LA_ETHER] = "LA_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_ETHER] = "LA_IH_NIX_ETHER",
- [NPC_LID_LA][NPC_LT_LA_HIGIG2_ETHER] = "LA_HIGIG2_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_HIGIG2_ETHER] = "LA_IH_NIX_HIGIG2_ETHER",
- [NPC_LID_LB][0] = "NONE",
- [NPC_LID_LB][NPC_LT_LB_CTAG] = "LB_CTAG",
- [NPC_LID_LB][NPC_LT_LB_STAG_QINQ] = "LB_STAG_QINQ",
- [NPC_LID_LB][NPC_LT_LB_ETAG] = "LB_ETAG",
- [NPC_LID_LB][NPC_LT_LB_EXDSA] = "LB_EXDSA",
- [NPC_LID_LB][NPC_LT_LB_VLAN_EXDSA] = "LB_VLAN_EXDSA",
- [NPC_LID_LC][0] = "NONE",
- [NPC_LID_LC][NPC_LT_LC_IP] = "LC_IP",
- [NPC_LID_LC][NPC_LT_LC_IP6] = "LC_IP6",
- [NPC_LID_LC][NPC_LT_LC_ARP] = "LC_ARP",
- [NPC_LID_LC][NPC_LT_LC_IP6_EXT] = "LC_IP6_EXT",
- [NPC_LID_LC][NPC_LT_LC_NGIO] = "LC_NGIO",
- [NPC_LID_LD][0] = "NONE",
- [NPC_LID_LD][NPC_LT_LD_ICMP] = "LD_ICMP",
- [NPC_LID_LD][NPC_LT_LD_ICMP6] = "LD_ICMP6",
- [NPC_LID_LD][NPC_LT_LD_UDP] = "LD_UDP",
- [NPC_LID_LD][NPC_LT_LD_TCP] = "LD_TCP",
- [NPC_LID_LD][NPC_LT_LD_SCTP] = "LD_SCTP",
- [NPC_LID_LD][NPC_LT_LD_GRE] = "LD_GRE",
- [NPC_LID_LD][NPC_LT_LD_NVGRE] = "LD_NVGRE",
- [NPC_LID_LE][0] = "NONE",
- [NPC_LID_LE][NPC_LT_LE_VXLAN] = "LE_VXLAN",
- [NPC_LID_LE][NPC_LT_LE_ESP] = "LE_ESP",
- [NPC_LID_LE][NPC_LT_LE_GTPC] = "LE_GTPC",
- [NPC_LID_LE][NPC_LT_LE_GTPU] = "LE_GTPU",
- [NPC_LID_LE][NPC_LT_LE_GENEVE] = "LE_GENEVE",
- [NPC_LID_LE][NPC_LT_LE_VXLANGPE] = "LE_VXLANGPE",
- [NPC_LID_LF][0] = "NONE",
- [NPC_LID_LF][NPC_LT_LF_TU_ETHER] = "LF_TU_ETHER",
- [NPC_LID_LG][0] = "NONE",
- [NPC_LID_LG][NPC_LT_LG_TU_IP] = "LG_TU_IP",
- [NPC_LID_LG][NPC_LT_LG_TU_IP6] = "LG_TU_IP6",
- [NPC_LID_LH][0] = "NONE",
- [NPC_LID_LH][NPC_LT_LH_TU_UDP] = "LH_TU_UDP",
- [NPC_LID_LH][NPC_LT_LH_TU_TCP] = "LH_TU_TCP",
- [NPC_LID_LH][NPC_LT_LH_TU_SCTP] = "LH_TU_SCTP",
- [NPC_LID_LH][NPC_LT_LH_TU_ESP] = "LH_TU_ESP",
-};
-
-static uint16_t
-otx2_get_nibbles(struct rte_flow *flow, uint16_t size, uint32_t bit_offset)
-{
- uint32_t byte_index, noffset;
- uint16_t data, mask;
- uint8_t *bytes;
-
- bytes = (uint8_t *)flow->mcam_data;
- mask = (1ULL << (size * 4)) - 1;
- byte_index = bit_offset / 8;
- noffset = bit_offset % 8;
- data = *(uint16_t *)&bytes[byte_index];
- data >>= noffset;
- data &= mask;
-
- return data;
-}
-
-static void
-otx2_flow_print_parse_nibbles(FILE *file, struct rte_flow *flow,
- uint64_t parse_nibbles)
-{
- struct npc_rx_parse_nibble_s *rx_parse;
- uint32_t data, offset = 0;
-
- rx_parse = (struct npc_rx_parse_nibble_s *)&parse_nibbles;
-
- if (rx_parse->chan) {
- data = otx2_get_nibbles(flow, 3, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_CHAN:%#03X\n", data);
- offset += 12;
- }
-
- if (rx_parse->errlev) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRLEV:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->errcode) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRCODE:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->l2l3bm) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_L2L3_BCAST:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->latype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_LTYPE:%s\n",
- ltype_str[NPC_LID_LA][data]);
- offset += 4;
- }
-
- if (rx_parse->laflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lbtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_LTYPE:%s\n",
- ltype_str[NPC_LID_LB][data]);
- offset += 4;
- }
-
- if (rx_parse->lbflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lctype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_LTYPE:%s\n",
- ltype_str[NPC_LID_LC][data]);
- offset += 4;
- }
-
- if (rx_parse->lcflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->ldtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_LTYPE:%s\n",
- ltype_str[NPC_LID_LD][data]);
- offset += 4;
- }
-
- if (rx_parse->ldflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->letype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_LTYPE:%s\n",
- ltype_str[NPC_LID_LE][data]);
- offset += 4;
- }
-
- if (rx_parse->leflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lftype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_LTYPE:%s\n",
- ltype_str[NPC_LID_LF][data]);
- offset += 4;
- }
-
- if (rx_parse->lfflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lgtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_LTYPE:%s\n",
- ltype_str[NPC_LID_LG][data]);
- offset += 4;
- }
-
- if (rx_parse->lgflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lhtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_LTYPE:%s\n",
- ltype_str[NPC_LID_LH][data]);
- offset += 4;
- }
-
- if (rx_parse->lhflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_FLAGS:%#02X\n", data);
- }
-}
-
-static void
-otx2_flow_print_xtractinfo(FILE *file, struct npc_xtract_info *lfinfo,
- struct rte_flow *flow, int lid, int lt)
-{
- uint8_t *datastart, *maskstart;
- int i;
-
- datastart = (uint8_t *)&flow->mcam_data + lfinfo->key_off;
- maskstart = (uint8_t *)&flow->mcam_mask + lfinfo->key_off;
-
- fprintf(file, "\t%s, hdr offset:%#X, len:%#X, key offset:%#X, ",
- ltype_str[lid][lt], lfinfo->hdr_off,
- lfinfo->len, lfinfo->key_off);
-
- fprintf(file, "Data:0X");
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", datastart[i]);
-
- fprintf(file, ", ");
-
- fprintf(file, "Mask:0X");
-
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", maskstart[i]);
-
- fprintf(file, "\n");
-}
-
-static void
-otx2_flow_print_item(FILE *file, struct otx2_eth_dev *hw,
- struct npc_xtract_info *xinfo, struct rte_flow *flow,
- int intf, int lid, int lt, int ld)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_xtract_info *lflags_info;
- int i, lf_cfg;
-
- otx2_flow_print_xtractinfo(file, xinfo, flow, lid, lt);
-
- if (xinfo->flags_enable) {
- lf_cfg = npc_flow->prx_lfcfg[ld].i;
-
- if (lf_cfg == lid) {
- for (i = 0; i < NPC_MAX_LFL; i++) {
- lflags_info = npc_flow->prx_fxcfg[intf]
- [ld][i].xtract;
-
- otx2_flow_print_xtractinfo(file, lflags_info,
- flow, lid, lt);
- }
- }
- }
-}
-
-static void
-otx2_flow_dump_patterns(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_lid_lt_xtract_info *lt_xinfo;
- struct npc_xtract_info *xinfo;
- uint32_t intf, lid, ld, i;
- uint64_t parse_nibbles;
- uint16_t ltype;
-
- intf = flow->nix_intf;
- parse_nibbles = npc_flow->keyx_supp_nmask[intf];
- otx2_flow_print_parse_nibbles(file, flow, parse_nibbles);
-
- for (i = 0; i < flow->num_patterns; i++) {
- lid = flow->dump_data[i].lid;
- ltype = flow->dump_data[i].ltype;
- lt_xinfo = &npc_flow->prx_dxcfg[intf][lid][ltype];
-
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- xinfo = &lt_xinfo->xtract[ld];
- if (!xinfo->enable)
- continue;
- otx2_flow_print_item(file, hw, xinfo, flow, intf, lid,
- ltype, ld);
- }
- }
-}
-
-static void
-otx2_flow_dump_tx_action(FILE *file, uint64_t npc_action)
-{
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
- uint32_t tx_op, index, match_id;
-
- tx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (tx_op) {
- case NIX_TX_ACTIONOP_DROP:
- fprintf(file, "NIX_TX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_TX_ACTIONOP_UCAST_DEFAULT:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_DEFAULT);
- break;
- case NIX_TX_ACTIONOP_UCAST_CHAN:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_CHAN);
- strncpy(index_name, "Transmit Channel:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_MCAST:
- fprintf(file, "NIX_TX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast Table Index:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_DROP_VIOL:
- fprintf(file, "NIX_TX_ACTIONOP_DROP_VIOL (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_DROP_VIOL);
- break;
- }
-
- index = ((npc_action & NPC_TX_ACTION_INDEX_MASK) >> 12) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_TX_ACTION_MATCH_MASK) >> 32) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-}
-
-static void
-otx2_flow_dump_rx_action(FILE *file, uint64_t npc_action)
-{
- uint32_t rx_op, pf_func, index, match_id, flowkey_alg;
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
-
- rx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (rx_op) {
- case NIX_RX_ACTIONOP_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_RX_ACTIONOP_UCAST:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST);
- strncpy(index_name, "RQ Index", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_UCAST_IPSEC:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST_IPSEC (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST_IPSEC);
- strncpy(index_name, "RQ Index:", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_MCAST:
- fprintf(file, "NIX_RX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_RSS:
- fprintf(file, "NIX_RX_ACTIONOP_RSS (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_RSS);
- strncpy(index_name, "RSS Group Index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_PF_FUNC_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_PF_FUNC_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_PF_FUNC_DROP);
- break;
- case NIX_RX_ACTIONOP_MIRROR:
- fprintf(file, "NIX_RX_ACTIONOP_MIRROR (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MIRROR);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- }
-
- pf_func = ((npc_action & NPC_RX_ACTION_PFFUNC_MASK) >> 4) & 0xFFFF;
-
- fprintf(file, "\tPF_FUNC: %#04X\n", pf_func);
-
- index = ((npc_action & NPC_RX_ACTION_INDEX_MASK) >> 20) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_RX_ACTION_MATCH_MASK) >> 40) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-
- flowkey_alg = ((npc_action & NPC_RX_ACTION_FLOWKEY_MASK) >> 56) & 0x1F;
-
- fprintf(file, "\tFlow Key Alg:%#X\n", flowkey_alg);
-}
-
-static void
-otx2_flow_dump_parsed_action(FILE *file, uint64_t npc_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX Action:%#016lX\n", npc_action);
- otx2_flow_dump_rx_action(file, npc_action);
- } else {
- fprintf(file, "NPC TX Action:%#016lX\n", npc_action);
- otx2_flow_dump_tx_action(file, npc_action);
- }
-}
-
-static void
-otx2_flow_dump_rx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t type, lid, relptr;
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG0_VALID_MASK) {
- relptr = vtag_action & NIX_RX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG0_LID_MASK) >> 8)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG0_TYPE_MASK) >> 12)
- & 0x7;
-
- fprintf(file, "\tVTAG0:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG1_VALID_MASK) {
- relptr = ((vtag_action & NIX_RX_VTAGACT_VTAG1_RELPTR_MASK)
- >> 32) & 0xFF;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG1_LID_MASK) >> 40)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG1_TYPE_MASK) >> 44)
- & 0x7;
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-}
-
-static void
-otx2_get_vtag_opname(uint32_t op, char *opname, int len)
-{
- switch (op) {
- case 0x0:
- strncpy(opname, "NOP", len - 1);
- break;
- case 0x1:
- strncpy(opname, "INSERT", len - 1);
- break;
- case 0x2:
- strncpy(opname, "REPLACE", len - 1);
- break;
- }
-}
-
-static void
-otx2_flow_dump_tx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t relptr, lid, op, vtag_def;
- char opname[10];
-
- relptr = vtag_action & NIX_TX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG0_LID_MASK) >> 8) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG0_OP_MASK) >> 12) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG0_DEF_MASK) >> 16)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG0 relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-
- relptr = ((vtag_action & NIX_TX_VTAGACT_VTAG1_RELPTR_MASK) >> 32)
- & 0xFF;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG1_LID_MASK) >> 40) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG1_OP_MASK) >> 44) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG1_DEF_MASK) >> 48)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-}
-
-static void
-otx2_flow_dump_vtag_action(FILE *file, uint64_t vtag_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_rx_vtag_action(file, vtag_action);
- } else {
- fprintf(file, "NPC TX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_tx_vtag_action(file, vtag_action);
- }
-}
-
-void
-otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw, struct rte_flow *flow)
-{
- bool is_rx = 0;
- int i;
-
- fprintf(file, "MCAM Index:%d\n", flow->mcam_id);
- fprintf(file, "Interface :%s (%d)\n", intf_str[flow->nix_intf],
- flow->nix_intf);
- fprintf(file, "Priority :%d\n", flow->priority);
-
- if (flow->nix_intf == NIX_INTF_RX)
- is_rx = 1;
-
- otx2_flow_dump_parsed_action(file, flow->npc_action, is_rx);
- otx2_flow_dump_vtag_action(file, flow->vtag_action, is_rx);
- fprintf(file, "Patterns:\n");
- otx2_flow_dump_patterns(file, hw, flow);
-
- fprintf(file, "MCAM Raw Data :\n");
-
- for (i = 0; i < OTX2_MAX_MCAM_WIDTH_DWORDS; i++) {
- fprintf(file, "\tDW%d :%016lX\n", i, flow->mcam_data[i]);
- fprintf(file, "\tDW%d_Mask:%016lX\n", i, flow->mcam_mask[i]);
- }
-
- fprintf(file, "\n");
-}
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
deleted file mode 100644
index 91267bbb81..0000000000
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ /dev/null
@@ -1,1239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
-{
- while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
- (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
- pattern++;
-
- return pattern;
-}
-
-/*
- * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
- * Tunnel+SCTP
- */
-int
-otx2_flow_parse_lh(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LH;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LH_TU_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LH_TU_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LH_TU_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LH_TU_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+IPv4, Tunnel+IPv6 */
-int
-otx2_flow_parse_lg(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LG;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- lt = NPC_LT_LG_TU_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- lt = NPC_LT_LG_TU_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- } else {
- /* There is no tunneled IP header */
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+Ether */
-int
-otx2_flow_parse_lf(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern, *last_pattern;
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int nr_vlans = 0;
- int rc;
-
- /* We hit this layer if there is a tunneling protocol */
- if (!pst->tunnel)
- return 0;
-
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LF;
- lt = NPC_LT_LF_TU_ETHER;
- lflags = 0;
-
- info.def_mask = &rte_flow_item_vlan_mask;
- /* No match support for vlan tags */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- /* Look ahead and find out any VLAN tags. These can be
- * detected but no data matching is available.
- */
- last_pattern = pst->pattern;
- pattern = pst->pattern + 1;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
- otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
- switch (nr_vlans) {
- case 0:
- break;
- case 1:
- lflags = NPC_F_TU_ETHER_CTAG;
- break;
- case 2:
- lflags = NPC_F_TU_ETHER_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 2 vlans with tunneled Ethernet "
- "not supported");
- return -rte_errno;
- }
-
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- info.hw_hdr_len = 0;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- pst->pattern = last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-int
-otx2_flow_parse_le(struct otx2_parse_state *pst)
-{
- /*
- * We are positioned at UDP. Scan ahead and look for
- * UDP encapsulated tunnel protocols. If available,
- * parse them. In that case handle this:
- * - RTE spec assumes we point to tunnel header.
- * - NPC parser provides offset from UDP header.
- */
-
- /*
- * Note: Add support to GENEVE, VXLAN_GPE when we
- * upgrade DPDK
- *
- * Note: Better to split flags into two nibbles:
- * - Higher nibble can have flags
- * - Lower nibble to further enumerate protocols
- * and have flags based extraction
- */
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- char hw_mask[64];
- int rc;
-
- if (pst->tunnel)
- return 0;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LE);
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LE;
- lflags = 0;
-
- /* Ensure we are not matching anything in UDP */
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc)
- return rc;
-
- info.hw_mask = &hw_mask;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- otx2_npc_dbg("Pattern->type = %d", pattern->type);
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- lflags = NPC_F_UDP_VXLAN;
- info.def_mask = &rte_flow_item_vxlan_mask;
- info.len = sizeof(struct rte_flow_item_vxlan);
- lt = NPC_LT_LE_VXLAN;
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LE_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- lflags = NPC_F_UDP_GTP_GTPC;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPC;
- break;
- case RTE_FLOW_ITEM_TYPE_GTPU:
- lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPU;
- break;
- case RTE_FLOW_ITEM_TYPE_GENEVE:
- lflags = NPC_F_UDP_GENEVE;
- info.def_mask = &rte_flow_item_geneve_mask;
- info.len = sizeof(struct rte_flow_item_geneve);
- lt = NPC_LT_LE_GENEVE;
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- lflags = NPC_F_UDP_VXLANGPE;
- info.def_mask = &rte_flow_item_vxlan_gpe_mask;
- info.len = sizeof(struct rte_flow_item_vxlan_gpe);
- lt = NPC_LT_LE_VXLANGPE;
- break;
- default:
- return 0;
- }
-
- pst->tunnel = 1;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static int
-flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
-{
- int nr_labels = 0;
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int rc;
- uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
- NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
-
- /*
- * pst->pattern points to first MPLS label. We only check
- * that subsequent labels do not have anything to match.
- */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
- nr_labels++;
-
- /* Basic validation of 2nd/3rd/4th mpls item */
- if (nr_labels > 1) {
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- pst->last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- if (nr_labels > 4) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->last_pattern,
- "more than 4 mpls labels not supported");
- return -rte_errno;
- }
-
- *flag = flag_list[nr_labels - 1];
- return 0;
-}
-
-int
-otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
-{
- /* Find number of MPLS labels */
- struct rte_flow_item_mpls hw_mask;
- struct otx2_flow_item_info info;
- int lt, lflags;
- int rc;
-
- lflags = 0;
-
- if (lid == NPC_LID_LC)
- lt = NPC_LT_LC_MPLS;
- else if (lid == NPC_LID_LD)
- lt = NPC_LT_LD_TU_MPLS_IN_IP;
- else
- lt = NPC_LT_LE_TU_MPLS_IN_UDP;
-
- /* Prepare for parsing the first item */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /*
- * Parse for more labels.
- * This sets lflags and pst->last_pattern correctly.
- */
- rc = flow_parse_mpls_label_stack(pst, &lflags);
- if (rc != 0)
- return rc;
-
- pst->tunnel = 1;
- pst->pattern = pst->last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-/*
- * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
- * GTP, GTPC, GTPU, ESP
- *
- * Note: UDP tunnel protocols are identified by flags.
- * LPTR for these protocol still points to UDP
- * header. Need flag based extraction to support
- * this.
- */
-int
-otx2_flow_parse_ld(struct otx2_parse_state *pst)
-{
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint32_t gre_key_mask = 0xffffffff;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int rc;
-
- if (pst->tunnel) {
- /* We have already parsed MPLS or IPv4/v6 followed
- * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
- * would be parsed as tunneled versions. Skip
- * this layer, except for tunneled MPLS. If LC is
- * MPLS, we have anyway skipped all stacked MPLS
- * labels.
- */
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LD);
- return 0;
- }
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
-
- lid = NPC_LID_LD;
- lflags = 0;
-
- otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ICMP:
- if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
- lt = NPC_LT_LD_ICMP6;
- else
- lt = NPC_LT_LD_ICMP;
- info.def_mask = &rte_flow_item_icmp_mask;
- info.len = sizeof(struct rte_flow_item_icmp);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LD_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LD_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LD_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &rte_flow_item_gre_mask;
- info.len = sizeof(struct rte_flow_item_gre);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE_KEY:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &gre_key_mask;
- info.len = sizeof(gre_key_mask);
- info.hw_hdr_len = 4;
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- lt = NPC_LT_LD_NVGRE;
- lflags = NPC_F_GRE_NVGRE;
- info.def_mask = &rte_flow_item_nvgre_mask;
- info.len = sizeof(struct rte_flow_item_nvgre);
- /* Further IP/Ethernet are parsed as tunneled */
- pst->tunnel = 1;
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static inline void
-flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern + 1;
-
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
- pst->tunnel = 1;
-}
-
-static int
-otx2_flow_raw_item_prepare(const struct rte_flow_item_raw *raw_spec,
- const struct rte_flow_item_raw *raw_mask,
- struct otx2_flow_item_info *info,
- uint8_t *spec_buf, uint8_t *mask_buf)
-{
- uint32_t custom_hdr_size = 0;
-
- memset(spec_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- memset(mask_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- custom_hdr_size = raw_spec->offset + raw_spec->length;
-
- memcpy(spec_buf + raw_spec->offset, raw_spec->pattern,
- raw_spec->length);
-
- if (raw_mask->pattern) {
- memcpy(mask_buf + raw_spec->offset, raw_mask->pattern,
- raw_spec->length);
- } else {
- memset(mask_buf + raw_spec->offset, 0xFF, raw_spec->length);
- }
-
- info->len = custom_hdr_size;
- info->spec = spec_buf;
- info->mask = mask_buf;
-
- return 0;
-}
-
-/* Outer IPv4, Outer IPv6, MPLS, ARP */
-int
-otx2_flow_parse_lc(struct otx2_parse_state *pst)
-{
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- const struct rte_flow_item_raw *raw_spec;
- struct otx2_flow_item_info info;
- int lid, lt, len;
- int rc;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LC);
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LC;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_IPV4:
- lt = NPC_LT_LC_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- break;
- case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
- lt = NPC_LT_LC_ARP;
- info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6_EXT:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6_EXT;
- info.def_mask = &rte_flow_item_ipv6_ext_mask;
- info.len = sizeof(struct rte_flow_item_ipv6_ext);
- info.hw_hdr_len = 40;
- break;
- case RTE_FLOW_ITEM_TYPE_RAW:
- raw_spec = pst->pattern->spec;
- if (!raw_spec->relative)
- return 0;
-
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_NGIO;
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- break;
- default:
- /* No match at this layer */
- return 0;
- }
-
- /* Identify if IP tunnels MPLS or IPv4/v6 */
- flow_check_lc_ip_tunnel(pst);
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* VLAN, ETAG */
-int
-otx2_flow_parse_lb(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern;
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- const struct rte_flow_item *last_pattern;
- const struct rte_flow_item_raw *raw_spec;
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- struct otx2_flow_item_info info;
- int lid, lt, lflags, len;
- int nr_vlans = 0;
- int rc;
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = NPC_TPID_LENGTH;
-
- lid = NPC_LID_LB;
- lflags = 0;
- last_pattern = pattern;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- /* RTE vlan is either 802.1q or 802.1ad,
- * this maps to either CTAG/STAG. We need to decide
- * based on number of VLANS present. Matching is
- * supported on first tag only.
- */
- info.def_mask = &rte_flow_item_vlan_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
-
- pattern = pst->pattern;
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
-
- /* Basic validation of 2nd/3rd vlan item */
- if (nr_vlans > 1) {
- otx2_npc_dbg("Vlans = %d", nr_vlans);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- switch (nr_vlans) {
- case 1:
- lt = NPC_LT_LB_CTAG;
- break;
- case 2:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_CTAG;
- break;
- case 3:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 3 vlans not supported");
- return -rte_errno;
- }
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
- /* we can support ETAG and match a subsequent CTAG
- * without any matching support.
- */
- lt = NPC_LT_LB_ETAG;
- lflags = 0;
-
- last_pattern = pst->pattern;
- pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- info.def_mask = &rte_flow_item_vlan_mask;
- /* set supported mask to NULL for vlan tag */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
-
- lflags = NPC_F_ETAG_CTAG;
- last_pattern = pattern;
- }
-
- info.def_mask = &rte_flow_item_e_tag_mask;
- info.len = sizeof(struct rte_flow_item_e_tag);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_RAW) {
- raw_spec = pst->pattern->spec;
- if (raw_spec->relative)
- return 0;
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- lt = NPC_LT_LB_VLAN_EXDSA;
- } else if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- lt = NPC_LT_LB_EXDSA;
- } else {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "exdsa or vlan_exdsa not enabled on"
- " port");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- info.hw_hdr_len = 0;
- } else {
- return 0;
- }
-
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /* Point pattern to last item consumed */
- pst->pattern = last_pattern;
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-
-int
-otx2_flow_parse_la(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len += NPC_HIGIG2_LENGTH;
- }
- } else {
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_HIGIG2_LENGTH;
- }
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-int
-otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_higig2_hdr hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_HIGIG2)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_higig2_hdr_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_higig2_hdr);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-static int
-parse_rss_action(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action *act,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_rss_info *rss_info = &hw->rss_info;
- const struct rte_flow_action_rss *rss;
- uint32_t i;
-
- rss = (const struct rte_flow_action_rss *)act->conf;
-
- /* Not supported */
- if (attr->egress) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
- attr, "No support of RSS in egress");
- }
-
- if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "multi-queue mode is disabled");
-
- /* Parse RSS related parameters from configuration */
- if (!rss || !rss->queue_num)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "no valid queues");
-
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions"
- " are not supported");
-
- if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "RSS hash key too large");
-
- if (rss->queue_num > rss_info->rss_size)
- return rte_flow_error_set
- (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "too many queues for RSS context");
-
- for (i = 0; i < rss->queue_num; i++) {
- if (rss->queue[i] >= dev->data->nb_rx_queues)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "queue id > max number"
- " of queues");
- }
-
- return 0;
-}
-
-int
-otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- const struct rte_flow_action_mark *act_mark;
- const struct rte_flow_action_queue *act_q;
- const struct rte_flow_action_vf *vf_act;
- uint16_t pf_func, vf_id, port_id, pf_id;
- char if_name[RTE_ETH_NAME_MAX_LEN];
- bool vlan_insert_action = false;
- struct rte_eth_dev *eth_dev;
- const char *errmsg = NULL;
- int sel_act, req_act = 0;
- int errcode = 0;
- int mark = 0;
- int rq = 0;
-
- /* Initialize actions */
- flow->ctr_id = NPC_COUNTER_NONE;
- pf_func = otx2_pfvf_func(hw->pf, hw->vf);
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- otx2_npc_dbg("Action type = %d", actions->type);
-
- switch (actions->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- act_mark =
- (const struct rte_flow_action_mark *)actions->conf;
-
- /* We have only 16 bits. Use highest val for flag */
- if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
- errmsg = "mark value must be < 0xfffe";
- errcode = ENOTSUP;
- goto err_exit;
- }
- mark = act_mark->id + 1;
- req_act |= OTX2_FLOW_ACT_MARK;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_FLAG:
- mark = OTX2_FLOW_FLAG_VAL;
- req_act |= OTX2_FLOW_ACT_FLAG;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_COUNT:
- /* Indicates, need a counter */
- flow->ctr_id = 1;
- req_act |= OTX2_FLOW_ACT_COUNT;
- break;
-
- case RTE_FLOW_ACTION_TYPE_DROP:
- req_act |= OTX2_FLOW_ACT_DROP;
- break;
-
- case RTE_FLOW_ACTION_TYPE_PF:
- req_act |= OTX2_FLOW_ACT_PF;
- pf_func &= (0xfc00);
- break;
-
- case RTE_FLOW_ACTION_TYPE_VF:
- vf_act = (const struct rte_flow_action_vf *)
- actions->conf;
- req_act |= OTX2_FLOW_ACT_VF;
- if (vf_act->original == 0) {
- vf_id = vf_act->id & RVU_PFVF_FUNC_MASK;
- if (vf_id >= hw->maxvf) {
- errmsg = "invalid vf specified";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- }
- break;
-
- case RTE_FLOW_ACTION_TYPE_PORT_ID:
- case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
- if (actions->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
- const struct rte_flow_action_port_id *port_act;
-
- port_act = actions->conf;
- port_id = port_act->id;
- } else {
- const struct rte_flow_action_ethdev *ethdev_act;
-
- ethdev_act = actions->conf;
- port_id = ethdev_act->port_id;
- }
- if (rte_eth_dev_get_name_by_port(port_id, if_name)) {
- errmsg = "Name not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- eth_dev = rte_eth_dev_allocated(if_name);
- if (!eth_dev) {
- errmsg = "eth_dev not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- if (!otx2_ethdev_is_same_driver(eth_dev)) {
- errmsg = "Output port id unsupported type";
- errcode = ENOTSUP;
- goto err_exit;
- }
- if (!otx2_dev_is_vf(otx2_eth_pmd_priv(eth_dev))) {
- errmsg = "Output port should be VF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- vf_id = otx2_eth_pmd_priv(eth_dev)->vf;
- if (vf_id >= hw->maxvf) {
- errmsg = "Invalid vf for output port";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_id = otx2_eth_pmd_priv(eth_dev)->pf;
- if (pf_id != hw->pf) {
- errmsg = "Output port unsupported PF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- req_act |= OTX2_FLOW_ACT_VF;
- break;
-
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Applicable only to ingress flow */
- act_q = (const struct rte_flow_action_queue *)
- actions->conf;
- rq = act_q->index;
- if (rq >= dev->data->nb_rx_queues) {
- errmsg = "invalid queue index";
- errcode = EINVAL;
- goto err_exit;
- }
- req_act |= OTX2_FLOW_ACT_QUEUE;
- break;
-
- case RTE_FLOW_ACTION_TYPE_RSS:
- errcode = parse_rss_action(dev, attr, actions, error);
- if (errcode)
- return -rte_errno;
-
- req_act |= OTX2_FLOW_ACT_RSS;
- break;
-
- case RTE_FLOW_ACTION_TYPE_SECURITY:
- /* Assumes user has already configured security
- * session for this flow. Associated conf is
- * opaque. When RTE security is implemented for otx2,
- * we need to verify that for specified security
- * session:
- * action_type ==
- * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
- * session_protocol ==
- * RTE_SECURITY_PROTOCOL_IPSEC
- *
- * RSS is not supported with inline ipsec. Get the
- * rq from associated conf, or make
- * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
- * action.
- * Currently, rq = 0 is assumed.
- */
- req_act |= OTX2_FLOW_ACT_SEC;
- rq = 0;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- req_act |= OTX2_FLOW_ACT_VLAN_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_STRIP;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- req_act |= OTX2_FLOW_ACT_VLAN_PCP_INSERT;
- break;
- default:
- errmsg = "Unsupported action specified";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if (req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT))
- vlan_insert_action = true;
-
- if ((req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT)) ==
- OTX2_FLOW_ACT_VLAN_PCP_INSERT) {
- errmsg = " PCP insert action can't be supported alone";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Both STRIP and INSERT actions are not supported */
- if (vlan_insert_action && (req_act & OTX2_FLOW_ACT_VLAN_STRIP)) {
- errmsg = "Both VLAN insert and strip actions not supported"
- " together";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Check if actions specified are compatible */
- if (attr->egress) {
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP) {
- errmsg = "VLAN pop action is not supported on Egress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_TX_ACTIONOP_DROP;
- } else if ((req_act & OTX2_FLOW_ACT_COUNT) ||
- vlan_insert_action) {
- flow->npc_action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
- } else {
- errmsg = "Unsupported action for egress";
- errcode = EINVAL;
- goto err_exit;
- }
- goto set_pf_func;
- }
-
- /* We have already verified the attr, this is ingress.
- * - Exactly one terminating action is supported
- * - Exactly one of MARK or FLAG is supported
- * - If terminating action is DROP, only count is valid.
- */
- sel_act = req_act & OTX2_FLOW_ACT_TERM;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only one terminating action supported";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only COUNT action is supported "
- "with DROP ingress action";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
- == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- errmsg = "Only one of FLAG or MARK action is supported";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (vlan_insert_action) {
- errmsg = "VLAN push/Insert action is not supported on Ingress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP)
- npc->vtag_actions++;
-
- /* Only VLAN action is provided */
- if (req_act == OTX2_FLOW_ACT_VLAN_STRIP)
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- /* Set NIX_RX_ACTIONOP */
- else if (req_act & (OTX2_FLOW_ACT_PF | OTX2_FLOW_ACT_VF)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- if (req_act & OTX2_FLOW_ACT_QUEUE)
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_RSS) {
- /* When user added a rule for rss, first we will add the
- *rule in MCAM and then update the action, once if we have
- *FLOW_KEY_ALG index. So, till we update the action with
- *flow_key_alg index, set the action to drop.
- */
- if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- else
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_SEC) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_COUNT) {
- /* Keep OTX2_FLOW_ACT_COUNT always at the end
- * This is default action, when user specify only
- * COUNT ACTION
- */
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else {
- /* Should never reach here */
- errmsg = "Invalid action specified";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (mark)
- flow->npc_action |= (uint64_t)mark << 40;
-
- if (rte_atomic32_read(&npc->mark_actions) == 1) {
- hw->rx_offload_flags |=
- NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
-
- if (npc->vtag_actions == 1) {
- hw->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
-
-set_pf_func:
- /* Ideally AF must ensure that correct pf_func is set */
- if (attr->egress)
- flow->npc_action |= (uint64_t)pf_func << 48;
- else
- flow->npc_action |= (uint64_t)pf_func << 4;
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
- errmsg);
- return -rte_errno;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
deleted file mode 100644
index 35f7d0f4bc..0000000000
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ /dev/null
@@ -1,969 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-static int
-flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
-{
- struct npc_mcam_alloc_counter_req *req;
- struct npc_mcam_alloc_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
- req->count = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *ctr = rsp->cntr_list[0];
- return rc;
-}
-
-int
-otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count)
-{
- struct npc_mcam_oper_counter_req *req;
- struct npc_mcam_oper_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *count = rsp->stat;
- return rc;
-}
-
-int
-otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->all = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-static void
-flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
-{
- int idx;
-
- for (idx = 0; idx < len; idx++)
- ptr[idx] = data[len - 1 - idx];
-}
-
-static int
-flow_check_copysz(size_t size, size_t len)
-{
- if (len <= size)
- return len;
- return -1;
-}
-
-static inline int
-flow_mem_is_zero(const void *mem, int len)
-{
- const char *m = mem;
- int i;
-
- for (i = 0; i < len; i++) {
- if (m[i] != 0)
- return 0;
- }
- return 1;
-}
-
-static void
-flow_set_hw_mask(struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo,
- char *hw_mask)
-{
- int max_off, offset;
- int j;
-
- if (xinfo->enable == 0)
- return;
-
- if (xinfo->hdr_off < info->hw_hdr_len)
- return;
-
- max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len;
-
- if (max_off > info->len)
- max_off = info->len;
-
- offset = xinfo->hdr_off - info->hw_hdr_len;
- for (j = offset; j < max_off; j++)
- hw_mask[j] = 0xff;
-}
-
-void
-otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt)
-{
- struct npc_xtract_info *xinfo, *lfinfo;
- char *hw_mask = info->hw_mask;
- int lf_cfg;
- int i, j;
- int intf;
-
- intf = pst->flow->nix_intf;
- xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
- memset(hw_mask, 0, info->len);
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- flow_set_hw_mask(info, &xinfo[i], hw_mask);
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
-
- if (xinfo[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- flow_set_hw_mask(info, &lfinfo[0], hw_mask);
- }
- }
- }
-}
-
-static int
-flow_update_extraction_data(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo)
-{
- uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
- struct npc_xtract_info *x;
- int k, idx, hdr_off;
- int len = 0;
-
- x = xinfo;
- len = x->len;
- hdr_off = x->hdr_off;
-
- if (hdr_off < info->hw_hdr_len)
- return 0;
-
- if (x->enable == 0)
- return 0;
-
- otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
- "x->key_off = %d", x->hdr_off, len, info->len,
- x->key_off);
-
- hdr_off -= info->hw_hdr_len;
-
- if (hdr_off + len > info->len)
- len = info->len - hdr_off;
-
- /* Check for over-write of previous layer */
- if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
- len)) {
- /* Cannot support this data match */
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Extraction unsupported");
- return -rte_errno;
- }
-
- len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
- - x->key_off,
- len);
- if (len < 0) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Internal Error");
- return -rte_errno;
- }
-
- /* Need to reverse complete structure so that dest addr is at
- * MSB so as to program the MCAM using mcam_data & mcam_mask
- * arrays
- */
- flow_prep_mcam_ldata(int_info,
- (const uint8_t *)info->spec + hdr_off,
- x->len);
- flow_prep_mcam_ldata(int_info_mask,
- (const uint8_t *)info->mask + hdr_off,
- x->len);
-
- otx2_npc_dbg("Spec: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ",
- ((const uint8_t *)info->spec)[k]);
-
- otx2_npc_dbg("Int_info: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ", int_info[k]);
-
- memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
- memcpy(pst->mcam_data + x->key_off, int_info, len);
-
- otx2_npc_dbg("Parse state mcam data & mask");
- for (idx = 0; idx < len ; idx++)
- otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
- *(pst->mcam_data + idx + x->key_off), idx,
- *(pst->mcam_mask + idx + x->key_off));
- return 0;
-}
-
-int
-otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt,
- uint8_t flags)
-{
- struct npc_lid_lt_xtract_info *xinfo;
- struct otx2_flow_dump_data *dump;
- struct npc_xtract_info *lfinfo;
- int intf, lf_cfg;
- int i, j, rc = 0;
-
- otx2_npc_dbg("Parse state function info mask total %s",
- (const uint8_t *)info->mask);
-
- pst->layer_mask |= lid;
- pst->lt[lid] = lt;
- pst->flags[lid] = flags;
-
- intf = pst->flow->nix_intf;
- xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
- otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
- if (xinfo->is_terminating)
- pst->terminate = 1;
-
- if (info->spec == NULL) {
- otx2_npc_dbg("Info spec NULL");
- goto done;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- rc = flow_update_extraction_data(pst, info, &xinfo->xtract[i]);
- if (rc != 0)
- return rc;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- if (xinfo->xtract[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- rc = flow_update_extraction_data(pst, info,
- &lfinfo[0]);
- if (rc != 0)
- return rc;
-
- if (lfinfo[0].enable)
- pst->flags[lid] = j;
- }
- }
- }
-
-done:
- dump = &pst->flow->dump_data[pst->flow->num_patterns++];
- dump->lid = lid;
- dump->ltype = lt;
- /* Next pattern to parse by subsequent layers */
- pst->pattern++;
- return 0;
-}
-
-static inline int
-flow_range_is_valid(const char *spec, const char *last, const char *mask,
- int len)
-{
- /* Mask must be zero or equal to spec as we do not support
- * non-contiguous ranges.
- */
- while (len--) {
- if (last[len] &&
- (spec[len] & mask[len]) != (last[len] & mask[len]))
- return 0; /* False */
- }
- return 1;
-}
-
-
-static inline int
-flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
-{
- /*
- * If no hw_mask, assume nothing is supported.
- * mask is never NULL
- */
- if (hw_mask == NULL)
- return flow_mem_is_zero(mask, len);
-
- while (len--) {
- if ((mask[len] | hw_mask[len]) != hw_mask[len])
- return 0; /* False */
- }
- return 1;
-}
-
-int
-otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error)
-{
- /* Item must not be NULL */
- if (item == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Item is NULL");
- return -rte_errno;
- }
- /* If spec is NULL, both mask and last must be NULL, this
- * makes it to match ANY value (eq to mask = 0).
- * Setting either mask or last without spec is an error
- */
- if (item->spec == NULL) {
- if (item->last == NULL && item->mask == NULL) {
- info->spec = NULL;
- return 0;
- }
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "mask or last set without spec");
- return -rte_errno;
- }
-
- /* We have valid spec */
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->spec = item->spec;
-
- /* If mask is not set, use default mask, err if default mask is
- * also NULL.
- */
- if (item->mask == NULL) {
- otx2_npc_dbg("Item mask null, using default mask");
- if (info->def_mask == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "No mask or default mask given");
- return -rte_errno;
- }
- info->mask = info->def_mask;
- } else {
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->mask = item->mask;
- }
-
- /* mask specified must be subset of hw supported mask
- * mask | hw_mask == hw_mask
- */
- if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Unsupported field in the mask");
- return -rte_errno;
- }
-
- /* Now we have spec and mask. OTX2 does not support non-contiguous
- * range. We should have either:
- * - spec & mask == last & mask or,
- * - last == 0 or,
- * - last == NULL
- */
- if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
- if (!flow_range_is_valid(item->spec, item->last, info->mask,
- info->len)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Unsupported range for match");
- return -rte_errno;
- }
- }
-
- return 0;
-}
-
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
- uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
- int i, j = 0;
-
- for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
- if (nibble_mask & (1 << i)) {
- nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
- cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
- j += 1;
- }
- }
-
- data[0] = cdata[0];
- data[1] = cdata[1];
-}
-
-static int
-flow_first_set_bit(uint64_t slab)
-{
- int num = 0;
-
- if ((slab & 0xffffffff) == 0) {
- num += 32;
- slab >>= 32;
- }
- if ((slab & 0xffff) == 0) {
- num += 16;
- slab >>= 16;
- }
- if ((slab & 0xff) == 0) {
- num += 8;
- slab >>= 8;
- }
- if ((slab & 0xf) == 0) {
- num += 4;
- slab >>= 4;
- }
- if ((slab & 0x3) == 0) {
- num += 2;
- slab >>= 2;
- }
- if ((slab & 0x1) == 0)
- num += 1;
-
- return num;
-}
-
-static int
-flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- uint32_t old_ent, uint32_t new_ent)
-{
- struct npc_mcam_shift_entry_req *req;
- struct npc_mcam_shift_entry_rsp *rsp;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- int rc = 0;
-
- otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
- flow->priority);
-
- list = &flow_info->flow_list[flow->priority];
-
- /* Old entry is disabled & it's contents are moved to new_entry,
- * new entry is enabled finally.
- */
- req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
- req->curr_entry[0] = old_ent;
- req->new_entry[0] = new_ent;
- req->shift_count = 1;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Remove old node from list */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id == old_ent)
- TAILQ_REMOVE(list, flow_iter, next);
- }
-
- /* Insert node with new mcam id at right place */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > new_ent)
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- }
- return rc;
-}
-
-/* Exchange all required entries with a given priority level */
-static int
-flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
-{
- struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
- uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
- uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
- /* Bit position within the slab */
- uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
- /* Overall bit position of the start of slab */
- /* free & live entry index */
- int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
- struct otx2_mcam_ents_info *ent_info;
- /* free & live bitmap slab */
- uint64_t sl_fr = 0, sl_lv = 0, *sl;
-
- fr_bmp = flow_info->free_entries[prio_lvl];
- fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
- lv_bmp = flow_info->live_entries[prio_lvl];
- lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
- ent_info = &flow_info->flow_entry_info[prio_lvl];
- mcam_entries = flow_info->mcam_entries;
-
-
- /* New entries allocated are always contiguous, but older entries
- * already in free/live bitmap can be non-contiguous: so return
- * shifted entries should be in non-contiguous format.
- */
- while (idx <= rsp->count) {
- if (!sl_fr && !sl_lv) {
- /* Lower index elements to be exchanged */
- if (dir < 0) {
- rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
- otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- } else {
- rc_fr = rte_bitmap_scan(fr_bmp_rev,
- &sl_fr_bit_off,
- &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp_rev,
- &sl_lv_bit_off,
- &sl_lv);
-
- otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- }
- }
-
- if (rc_fr) {
- fr_bit_pos = flow_first_set_bit(sl_fr);
- e_fr = sl_fr_bit_off + fr_bit_pos;
- otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
- } else {
- e_fr = ~(0);
- }
-
- if (rc_lv) {
- lv_bit_pos = flow_first_set_bit(sl_lv);
- e_lv = sl_lv_bit_off + lv_bit_pos;
- otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
- } else {
- e_lv = ~(0);
- }
-
- /* First entry is from free_bmap */
- if (e_fr < e_lv) {
- bmp = fr_bmp;
- e = e_fr;
- sl = &sl_fr;
- bit_pos = fr_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
- otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
- } else {
- bmp = lv_bmp;
- e = e_lv;
- sl = &sl_lv;
- bit_pos = lv_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
-
- otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
- if (idx < rsp->count)
- rc =
- flow_shift_lv_ent(mbox, flow,
- flow_info, e_id,
- rsp->entry + idx);
- }
-
- rte_bitmap_clear(bmp, e);
- rte_bitmap_set(bmp, rsp->entry + idx);
- /* Update entry list, use non-contiguous
- * list now.
- */
- rsp->entry_list[idx] = e_id;
- *sl &= ~(1 << bit_pos);
-
- /* Update min & max entry identifiers in current
- * priority level.
- */
- if (dir < 0) {
- ent_info->max_id = rsp->entry + idx;
- ent_info->min_id = e_id;
- } else {
- ent_info->max_id = e_id;
- ent_info->min_id = rsp->entry;
- }
-
- idx++;
- }
- return rc;
-}
-
-/* Validate if newly allocated entries lie in the correct priority zone
- * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
- * If not properly aligned, shift entries to do so
- */
-static int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio)
-{
- int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
- uint32_t tot_ent = 0;
-
- otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
-
- if (dir < 0)
- prio_idx = flow_info->flow_max_priority - 1;
-
- /* Only live entries needs to be shifted, free entries can just be
- * moved by bits manipulation.
- */
-
- /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority
- * level entries(lower indexes).
- *
- * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority
- * level entries(higher indexes) with highest indexes.
- */
- do {
- tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
-
- if (dir < 0 && prio_idx != prio &&
- rsp->entry > info[prio_idx].max_id && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "max id %u", rsp->entry, prio_idx,
- info[prio_idx].max_id);
-
- needs_shift = 1;
- } else if ((dir > 0) && (prio_idx != prio) &&
- (rsp->entry < info[prio_idx].min_id) && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "min id %u", rsp->entry, prio_idx,
- info[prio_idx].min_id);
- needs_shift = 1;
- }
-
- otx2_npc_dbg("Needs_shift = %d", needs_shift);
- if (needs_shift) {
- needs_shift = 0;
- rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
- prio_idx);
- } else {
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
- } while ((prio_idx != prio) && (prio_idx += dir));
-
- return rc;
-}
-
-static int
-flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
- int prio_lvl)
-{
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int step = 1;
-
- while (step < flow_info->flow_max_priority) {
- if (((prio_lvl + step) < flow_info->flow_max_priority) &&
- info[prio_lvl + step].live_ent) {
- *prio = NPC_MCAM_HIGHER_PRIO;
- return info[prio_lvl + step].min_id;
- }
-
- if (((prio_lvl - step) >= 0) &&
- info[prio_lvl - step].live_ent) {
- otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
- info[prio_lvl - step].live_ent);
- *prio = NPC_MCAM_LOWER_PRIO;
- return info[prio_lvl - step].max_id;
- }
- step++;
- }
- *prio = NPC_MCAM_ANY_PRIO;
- return 0;
-}
-
-static int
-flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
-{
- struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
- struct npc_mcam_alloc_entry_rsp rsp_local;
- struct npc_mcam_alloc_entry_rsp *rsp_cmd;
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mcam_ents_info *info;
- uint16_t ref_ent, idx;
- int rc, prio;
-
- info = &flow_info->flow_entry_info[flow->priority];
- free_bmp = flow_info->free_entries[flow->priority];
- free_bmp_rev = flow_info->free_entries_rev[flow->priority];
- live_bmp = flow_info->live_entries[flow->priority];
- live_bmp_rev = flow_info->live_entries_rev[flow->priority];
-
- ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->contig = 1;
- req->count = flow_info->flow_prealloc_size;
- req->priority = prio;
- req->ref_entry = ref_ent;
-
- otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
- if (rc)
- return rc;
-
- rsp = &rsp_local;
- memcpy(rsp, rsp_cmd, sizeof(*rsp));
-
- otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
- rsp->count, prio);
-
- /* Non-first ent cache fill */
- if (prio != NPC_MCAM_ANY_PRIO) {
- flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
- prio);
- } else {
- /* Copy into response entry list */
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
-
- otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
- /* Update free entries, reverse free entries list,
- * min & max entry ids.
- */
- for (idx = 0; idx < rsp->count; idx++) {
- if (unlikely(rsp->entry_list[idx] < info->min_id))
- info->min_id = rsp->entry_list[idx];
-
- if (unlikely(rsp->entry_list[idx] > info->max_id))
- info->max_id = rsp->entry_list[idx];
-
- /* Skip entry to be returned, not to be part of free
- * list.
- */
- if (prio == NPC_MCAM_HIGHER_PRIO) {
- if (unlikely(idx == (rsp->count - 1))) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- } else {
- if (unlikely(!idx)) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- }
- info->free_ent++;
- rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
- rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
- rsp->entry_list[idx] - 1);
-
- otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
- rsp->entry_list[idx],
- flow_info->mcam_entries - rsp->entry_list[idx] - 1);
- }
-
- otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
- flow_info->mcam_entries - *free_ent - 1);
- info->live_ent++;
- rte_bitmap_set(live_bmp, *free_ent);
- rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
-
- return 0;
-}
-
-static int
-flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
- struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info)
-{
- struct rte_bitmap *free, *free_rev, *live, *live_rev;
- uint32_t pos = 0, free_ent = 0, mcam_entries;
- struct otx2_mcam_ents_info *info;
- uint64_t slab = 0;
- int rc;
-
- otx2_npc_dbg("Flow priority %u", flow->priority);
-
- info = &flow_info->flow_entry_info[flow->priority];
-
- free_rev = flow_info->free_entries_rev[flow->priority];
- free = flow_info->free_entries[flow->priority];
- live_rev = flow_info->live_entries_rev[flow->priority];
- live = flow_info->live_entries[flow->priority];
- mcam_entries = flow_info->mcam_entries;
-
- if (info->free_ent) {
- rc = rte_bitmap_scan(free, &pos, &slab);
- if (rc) {
- /* Get free_ent from free entry bitmap */
- free_ent = pos + __builtin_ctzll(slab);
- otx2_npc_dbg("Allocated from cache entry %u", free_ent);
- /* Remove from free bitmaps and add to live ones */
- rte_bitmap_clear(free, free_ent);
- rte_bitmap_set(live, free_ent);
- rte_bitmap_clear(free_rev,
- mcam_entries - free_ent - 1);
- rte_bitmap_set(live_rev,
- mcam_entries - free_ent - 1);
-
- info->free_ent--;
- info->live_ent++;
- return free_ent;
- }
-
- otx2_npc_dbg("No free entry:its a mess");
- return -1;
- }
-
- rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
- if (rc)
- return rc;
-
- return free_ent;
-}
-
-int
-otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info)
-{
- int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
- struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
- struct npc_mcam_write_entry_req *req;
- struct mcam_entry *base_entry;
- struct mbox_msghdr *rsp;
- uint16_t ctr = ~(0);
- int rc, idx;
- int entry;
-
- if (use_ctr) {
- rc = flow_mcam_alloc_counter(mbox, &ctr);
- if (rc)
- return rc;
- }
-
- entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
- if (entry < 0) {
- otx2_err("Prealloc failed");
- otx2_flow_mcam_free_counter(mbox, ctr);
- return NPC_MCAM_ALLOC_FAILED;
- }
-
- if (pst->is_vf) {
- (void)otx2_mbox_alloc_msg_npc_read_base_steer_rule(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&base_rule_rsp);
- if (rc) {
- otx2_err("Failed to fetch VF's base MCAM entry");
- return rc;
- }
- base_entry = &base_rule_rsp->entry_data;
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- flow->mcam_data[idx] |= base_entry->kw[idx];
- flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
- }
- }
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- req->set_cntr = use_ctr;
- req->cntr = ctr;
- req->entry = entry;
- otx2_npc_dbg("Alloc & write entry %u", entry);
-
- req->intf =
- (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
- req->enable_entry = 1;
- req->entry_data.action = flow->npc_action;
- req->entry_data.vtag_action = flow->vtag_action;
-
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- req->entry_data.kw[idx] = flow->mcam_data[idx];
- req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
- }
-
- if (flow->nix_intf == OTX2_INTF_RX) {
- req->entry_data.kw[0] |= flow_info->channel;
- req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
- } else {
- uint16_t pf_func = (flow->npc_action >> 48) & 0xffff;
-
- pf_func = htons(pf_func);
- req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
- req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc != 0)
- return rc;
-
- flow->mcam_id = entry;
- if (use_ctr)
- flow->ctr_id = ctr;
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
deleted file mode 100644
index 8f5d0eed92..0000000000
--- a/drivers/net/octeontx2/otx2_link.c
+++ /dev/null
@@ -1,287 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <ethdev_pci.h>
-
-#include "otx2_ethdev.h"
-
-void
-otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
-{
- if (set)
- dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
- else
- dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
-
- rte_wmb();
-}
-
-static inline int
-nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
-{
- uint16_t wait = 1000;
-
- do {
- rte_rmb();
- if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
- break;
- wait--;
- rte_delay_ms(1);
- } while (wait);
-
- return wait ? 0 : -1;
-}
-
-static void
-nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
-{
- if (link && link->link_status)
- otx2_info("Port %d: Link Up - speed %u Mbps - %s",
- (int)(eth_dev->data->port_id),
- (uint32_t)link->link_speed,
- link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
- "full-duplex" : "half-duplex");
- else
- otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
-}
-
-void
-otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return;
-
- rte_eth_linkstatus_get(eth_dev, &eth_link);
-
- link->link_up = eth_link.link_status;
- link->speed = eth_link.link_speed;
- link->an = eth_link.link_autoneg;
- link->full_duplex = eth_link.link_duplex;
-}
-
-void
-otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev || !eth_dev->data->dev_conf.intr_conf.lsc)
- return;
-
- if (nix_wait_for_link_cfg(otx2_dev)) {
- otx2_err("Timeout waiting for link_cfg to complete");
- return;
- }
-
- eth_link.link_status = link->link_up;
- eth_link.link_speed = link->speed;
- eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
- eth_link.link_duplex = link->full_duplex;
-
- otx2_dev->speed = link->speed;
- otx2_dev->duplex = link->full_duplex;
-
- /* Print link info */
- nix_link_status_print(eth_dev, &eth_link);
-
- /* Update link info */
- rte_eth_linkstatus_set(eth_dev, &eth_link);
-
- /* Set the flag and execute application callbacks */
- rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
-}
-
-static int
-lbk_link_update(struct rte_eth_link *link)
-{
- link->link_status = RTE_ETH_LINK_UP;
- link->link_speed = RTE_ETH_SPEED_NUM_100G;
- link->link_autoneg = RTE_ETH_LINK_FIXED;
- link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- return 0;
-}
-
-static int
-cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_link_info_msg *rsp;
- int rc;
- otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- link->link_status = rsp->link_info.link_up;
- link->link_speed = rsp->link_info.speed;
- link->link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- if (rsp->link_info.full_duplex)
- link->link_duplex = rsp->link_info.full_duplex;
- return 0;
-}
-
-int
-otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_link link;
- int rc;
-
- RTE_SET_USED(wait_to_complete);
- memset(&link, 0, sizeof(struct rte_eth_link));
-
- if (!eth_dev->data->dev_started || otx2_dev_is_sdp(dev))
- return 0;
-
- if (otx2_dev_is_lbk(dev))
- rc = lbk_link_update(&link);
- else
- rc = cgx_link_update(dev, &link);
-
- if (rc)
- return rc;
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-static int
-nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_state_msg *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
- req->enable = enable;
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- rc = nix_dev_set_link_state(eth_dev, 1);
- if (rc)
- goto done;
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_start(eth_dev, i);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- return nix_dev_set_link_state(eth_dev, 0);
-}
-
-static int
-cgx_change_mode(struct otx2_eth_dev *dev, struct cgx_set_link_mode_args *cfg)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_mode_req *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_mode(mbox);
- req->args.speed = cfg->speed;
- req->args.duplex = cfg->duplex;
- req->args.an = cfg->an;
-
- return otx2_mbox_process(mbox);
-}
-
-#define SPEED_NONE 0
-static inline uint32_t
-nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
-{
- uint32_t link_speed = SPEED_NONE;
-
- /* 50G and 100G to be supported for board version C0 and above */
- if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & RTE_ETH_LINK_SPEED_100G)
- link_speed = 100000;
- if (link_speeds & RTE_ETH_LINK_SPEED_50G)
- link_speed = 50000;
- }
- if (link_speeds & RTE_ETH_LINK_SPEED_40G)
- link_speed = 40000;
- if (link_speeds & RTE_ETH_LINK_SPEED_25G)
- link_speed = 25000;
- if (link_speeds & RTE_ETH_LINK_SPEED_20G)
- link_speed = 20000;
- if (link_speeds & RTE_ETH_LINK_SPEED_10G)
- link_speed = 10000;
- if (link_speeds & RTE_ETH_LINK_SPEED_5G)
- link_speed = 5000;
- if (link_speeds & RTE_ETH_LINK_SPEED_1G)
- link_speed = 1000;
-
- return link_speed;
-}
-
-static inline uint8_t
-nix_parse_eth_link_duplex(uint32_t link_speeds)
-{
- if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
- return RTE_ETH_LINK_HALF_DUPLEX;
- else
- return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
-int
-otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_conf *conf = &eth_dev->data->dev_conf;
- struct cgx_set_link_mode_args cfg;
-
- /* If VF/SDP/LBK, link attributes cannot be changed */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return 0;
-
- memset(&cfg, 0, sizeof(struct cgx_set_link_mode_args));
- cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
- if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
- cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
-
- return cgx_change_mode(dev, &cfg);
- }
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
deleted file mode 100644
index 5fa9ae1396..0000000000
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ /dev/null
@@ -1,352 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <rte_memzone.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
- sizeof(uint32_t))
-
-#define SA_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ +\
- SA_TBL_SZ)
-
-const uint32_t *
-otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-
- static const uint32_t ptypes[] = {
- RTE_PTYPE_L2_ETHER_QINQ, /* LB */
- RTE_PTYPE_L2_ETHER_VLAN, /* LB */
- RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
- RTE_PTYPE_L2_ETHER_ARP, /* LC */
- RTE_PTYPE_L2_ETHER_NSH, /* LC */
- RTE_PTYPE_L2_ETHER_FCOE, /* LC */
- RTE_PTYPE_L2_ETHER_MPLS, /* LC */
- RTE_PTYPE_L3_IPV4, /* LC */
- RTE_PTYPE_L3_IPV4_EXT, /* LC */
- RTE_PTYPE_L3_IPV6, /* LC */
- RTE_PTYPE_L3_IPV6_EXT, /* LC */
- RTE_PTYPE_L4_TCP, /* LD */
- RTE_PTYPE_L4_UDP, /* LD */
- RTE_PTYPE_L4_SCTP, /* LD */
- RTE_PTYPE_L4_ICMP, /* LD */
- RTE_PTYPE_L4_IGMP, /* LD */
- RTE_PTYPE_TUNNEL_GRE, /* LD */
- RTE_PTYPE_TUNNEL_ESP, /* LD */
- RTE_PTYPE_TUNNEL_NVGRE, /* LD */
- RTE_PTYPE_TUNNEL_VXLAN, /* LE */
- RTE_PTYPE_TUNNEL_GENEVE, /* LE */
- RTE_PTYPE_TUNNEL_GTPC, /* LE */
- RTE_PTYPE_TUNNEL_GTPU, /* LE */
- RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
- RTE_PTYPE_INNER_L2_ETHER,/* LF */
- RTE_PTYPE_INNER_L3_IPV4, /* LG */
- RTE_PTYPE_INNER_L3_IPV6, /* LG */
- RTE_PTYPE_INNER_L4_TCP, /* LH */
- RTE_PTYPE_INNER_L4_UDP, /* LH */
- RTE_PTYPE_INNER_L4_SCTP, /* LH */
- RTE_PTYPE_INNER_L4_ICMP, /* LH */
- RTE_PTYPE_UNKNOWN,
- };
-
- return ptypes;
-}
-
-int
-otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (ptype_mask) {
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 0;
- } else {
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 1;
- }
-
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-}
-
-/*
- * +------------------ +------------------ +
- * | | IL4 | IL3| IL2 | TU | L4 | L3 | L2 |
- * +-------------------+-------------------+
- *
- * +-------------------+------------------ +
- * | | LH | LG | LF | LE | LD | LC | LB |
- * +-------------------+-------------------+
- *
- * ptype [LE - LD - LC - LB] = TU - L4 - L3 - T2
- * ptype_tunnel[LH - LG - LF] = IL4 - IL3 - IL2 - TU
- *
- */
-static void
-nix_create_non_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lb, lc, ld, le;
- uint16_t val;
- uint32_t idx;
-
- for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
- lb = idx & 0xF;
- lc = (idx & 0xF0) >> 4;
- ld = (idx & 0xF00) >> 8;
- le = (idx & 0xF000) >> 12;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lb) {
- case NPC_LT_LB_STAG_QINQ:
- val |= RTE_PTYPE_L2_ETHER_QINQ;
- break;
- case NPC_LT_LB_CTAG:
- val |= RTE_PTYPE_L2_ETHER_VLAN;
- break;
- }
-
- switch (lc) {
- case NPC_LT_LC_ARP:
- val |= RTE_PTYPE_L2_ETHER_ARP;
- break;
- case NPC_LT_LC_NSH:
- val |= RTE_PTYPE_L2_ETHER_NSH;
- break;
- case NPC_LT_LC_FCOE:
- val |= RTE_PTYPE_L2_ETHER_FCOE;
- break;
- case NPC_LT_LC_MPLS:
- val |= RTE_PTYPE_L2_ETHER_MPLS;
- break;
- case NPC_LT_LC_IP:
- val |= RTE_PTYPE_L3_IPV4;
- break;
- case NPC_LT_LC_IP_OPT:
- val |= RTE_PTYPE_L3_IPV4_EXT;
- break;
- case NPC_LT_LC_IP6:
- val |= RTE_PTYPE_L3_IPV6;
- break;
- case NPC_LT_LC_IP6_EXT:
- val |= RTE_PTYPE_L3_IPV6_EXT;
- break;
- case NPC_LT_LC_PTP:
- val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
- break;
- }
-
- switch (ld) {
- case NPC_LT_LD_TCP:
- val |= RTE_PTYPE_L4_TCP;
- break;
- case NPC_LT_LD_UDP:
- val |= RTE_PTYPE_L4_UDP;
- break;
- case NPC_LT_LD_SCTP:
- val |= RTE_PTYPE_L4_SCTP;
- break;
- case NPC_LT_LD_ICMP:
- case NPC_LT_LD_ICMP6:
- val |= RTE_PTYPE_L4_ICMP;
- break;
- case NPC_LT_LD_IGMP:
- val |= RTE_PTYPE_L4_IGMP;
- break;
- case NPC_LT_LD_GRE:
- val |= RTE_PTYPE_TUNNEL_GRE;
- break;
- case NPC_LT_LD_NVGRE:
- val |= RTE_PTYPE_TUNNEL_NVGRE;
- break;
- }
-
- switch (le) {
- case NPC_LT_LE_VXLAN:
- val |= RTE_PTYPE_TUNNEL_VXLAN;
- break;
- case NPC_LT_LE_ESP:
- val |= RTE_PTYPE_TUNNEL_ESP;
- break;
- case NPC_LT_LE_VXLANGPE:
- val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
- break;
- case NPC_LT_LE_GENEVE:
- val |= RTE_PTYPE_TUNNEL_GENEVE;
- break;
- case NPC_LT_LE_GTPC:
- val |= RTE_PTYPE_TUNNEL_GTPC;
- break;
- case NPC_LT_LE_GTPU:
- val |= RTE_PTYPE_TUNNEL_GTPU;
- break;
- case NPC_LT_LE_TU_MPLS_IN_GRE:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
- break;
- case NPC_LT_LE_TU_MPLS_IN_UDP:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
- break;
- }
- ptype[idx] = val;
- }
-}
-
-#define TU_SHIFT(x) ((x) >> PTYPE_NON_TUNNEL_WIDTH)
-static void
-nix_create_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lf, lg, lh;
- uint16_t val;
- uint32_t idx;
-
- /* Skip non tunnel ptype array memory */
- ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
-
- for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
- lf = idx & 0xF;
- lg = (idx & 0xF0) >> 4;
- lh = (idx & 0xF00) >> 8;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lf) {
- case NPC_LT_LF_TU_ETHER:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
- break;
- }
- switch (lg) {
- case NPC_LT_LG_TU_IP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
- break;
- case NPC_LT_LG_TU_IP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
- break;
- }
- switch (lh) {
- case NPC_LT_LH_TU_TCP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
- break;
- case NPC_LT_LH_TU_UDP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
- break;
- case NPC_LT_LH_TU_SCTP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
- break;
- case NPC_LT_LH_TU_ICMP:
- case NPC_LT_LH_TU_ICMP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
- break;
- }
-
- ptype[idx] = val;
- }
-}
-
-static void
-nix_create_rx_ol_flags_array(void *mem)
-{
- uint16_t idx, errcode, errlev;
- uint32_t val, *ol_flags;
-
- /* Skip ptype array memory */
- ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
-
- for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
- errlev = idx & 0xf;
- errcode = (idx & 0xff0) >> 4;
-
- val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
-
- switch (errlev) {
- case NPC_ERRLEV_RE:
- /* Mark all errors as BAD checksum errors
- * including Outer L2 length mismatch error
- */
- if (errcode) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LC:
- if (errcode == NPC_EC_OIP4_CSUM ||
- errcode == NPC_EC_IP_FRAG_OFFSET_1) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LG:
- if (errcode == NPC_EC_IIP4_CSUM)
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- else
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- break;
- case NPC_ERRLEV_NIX:
- if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
- errcode == NIX_RX_PERRCODE_OL4_LEN ||
- errcode == NIX_RX_PERRCODE_OL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
- errcode == NIX_RX_PERRCODE_IL4_LEN ||
- errcode == NIX_RX_PERRCODE_IL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
- errcode == NIX_RX_PERRCODE_OL3_LEN) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- }
- ol_flags[idx] = val;
- }
-}
-
-void *
-otx2_nix_fastpath_lookup_mem_get(void)
-{
- const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- const struct rte_memzone *mz;
- void *mem;
-
- /* SA_TBL starts after PTYPE_ARRAY & ERR_ARRAY */
- RTE_BUILD_BUG_ON(OTX2_NIX_SA_TBL_START != (PTYPE_ARRAY_SZ +
- ERR_ARRAY_SZ));
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- mem = mz->addr;
- /* Form the ptype array lookup memory */
- nix_create_non_tunnel_ptype_array(mem);
- nix_create_tunnel_ptype_array(mem);
- /* Form the rx ol_flags based on errcode */
- nix_create_rx_ol_flags_array(mem);
- return mem;
- }
- return NULL;
-}
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
deleted file mode 100644
index 49a700ca1d..0000000000
--- a/drivers/net/octeontx2/otx2_mac.c
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-
-int
-otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_mac_addr_set_or_get *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set mac address in CGX, rc=%d", rc);
-
- return 0;
-}
-
-int
-otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
-{
- struct cgx_max_dmac_entries_get_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->max_dmac_filters;
-}
-
-int
-otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
- uint32_t index __rte_unused, uint32_t pool __rte_unused)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_add_req *req;
- struct cgx_mac_addr_add_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to add mac address, rc=%d", rc);
- goto done;
- }
-
- /* Enable promiscuous mode at NIX level */
- otx2_nix_promisc_config(eth_dev, 1);
- dev->dmac_filter_enable = true;
- eth_dev->data->promiscuous = 0;
-
-done:
- return rc;
-}
-
-void
-otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_del_req *req;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
- req->index = index;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to delete mac address, rc=%d", rc);
-}
-
-int
-otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_set_mac_addr *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to set mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- /* Install the same entry into CGX DMAC filter table too. */
- otx2_cgx_mac_addr_set(eth_dev, addr);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_get_mac_addr_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
-
-done:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
deleted file mode 100644
index b9c63ad3bc..0000000000
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-static int
-nix_mc_addr_list_free(struct otx2_eth_dev *dev, uint32_t entry_count)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (entry_count == 0)
- goto exit;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry->mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- if (rc < 0)
- goto exit;
-
- TAILQ_REMOVE(&dev->mc_fltr_tbl, entry, next);
- rte_free(entry);
- entry_count--;
-
- if (entry_count == 0)
- break;
- }
-
- if (entry == NULL)
- dev->mc_tbl_set = false;
-
-exit:
- return rc;
-}
-
-static int
-nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_xtract_info *x_info;
- uint64_t mcam_data, mcam_mask;
- struct mcast_entry *entry;
- otx2_dxcfg_t *ld_cfg;
- uint8_t *mac_addr;
- uint64_t action;
- int idx, rc = 0;
-
- ld_cfg = &npc->prx_dxcfg;
- /* Get ETH layer profile info for populating mcam entries */
- x_info = &(*ld_cfg)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- /* The mbox memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- req->intf = NPC_MCAM_RX;
- req->enable_entry = 1;
-
- /* Channel base extracted to KW0[11:0] */
- req->entry_data.kw[0] = dev->rx_chan_base;
- req->entry_data.kw_mask[0] = RTE_LEN2MASK(12, uint64_t);
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)req->entry_data.kw;
- key_mask = (volatile uint8_t *)req->entry_data.kw_mask;
-
- mcam_data = 0ull;
- mcam_mask = RTE_LEN2MASK(48, uint64_t);
- mac_addr = &entry->mcast_mac.addr_bytes[0];
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- otx2_mbox_memcpy(key_data + x_info->key_off,
- &mcam_data, x_info->len);
- otx2_mbox_memcpy(key_mask + x_info->key_off,
- &mcam_mask, x_info->len);
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= ((uint64_t)otx2_pfvf_func(dev->pf, dev->vf)) << 4;
- req->entry_data.action = action;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t entry_count = 0, idx = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry_count++;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = entry_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || rsp->count < entry_count) {
- otx2_err("Failed to allocate required mcam entries");
- goto exit;
- }
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry->mcam_index = rsp->entry_list[idx];
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-static int
-nix_setup_mc_addr_list(struct otx2_eth_dev *dev,
- struct rte_ether_addr *mc_addr_set)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- uint32_t idx = 0;
- int rc = 0;
-
- /* Populate PMD's mcast list with given mcast mac addresses and
- * disable all mcam entries pertaining to the mcast list.
- */
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- rte_memcpy(&entry->mcast_mac, &mc_addr_set[idx++],
- RTE_ETHER_ADDR_LEN);
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t idx, priv_count = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (otx2_dev_is_vf(dev))
- return -ENOTSUP;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- priv_count++;
-
- if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- /* Free existing list if new list is null */
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- for (idx = 0; idx < nb_mc_addr; idx++) {
- if (!rte_is_multicast_ether_addr(&mc_addr_set[idx]))
- return -EINVAL;
- }
-
- /* New list is bigger than the existing list,
- * allocate mcam entries for the extra entries.
- */
- if (nb_mc_addr > priv_count) {
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = nb_mc_addr - priv_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || (rsp->count + priv_count < nb_mc_addr)) {
- otx2_err("Failed to allocate required entries");
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- /* Append new mcam entries to the existing mc list */
- for (idx = 0; idx < rsp->count; idx++) {
- entry = rte_zmalloc("otx2_nix_mc_entry",
- sizeof(struct mcast_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- nb_mc_addr = priv_count;
- rc = -ENOMEM;
- goto exit;
- }
- entry->mcam_index = rsp->entry_list[idx];
- TAILQ_INSERT_HEAD(&dev->mc_fltr_tbl, entry, next);
- }
- } else {
-		/* Free the extra mcam entries if the new list is smaller
-		 * than the existing list.
-		 */
- nix_mc_addr_list_free(dev, priv_count - nb_mc_addr);
- }
-
-
- /* Now mc_fltr_tbl has the required number of mcam entries,
- * Traverse through it and add new multicast filter table entries.
- */
- rc = nix_setup_mc_addr_list(dev, mc_addr_set);
- if (rc < 0)
- goto exit;
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
- if (rc < 0)
- goto exit;
-
- dev->mc_tbl_set = true;
-
- return 0;
-
-exit:
- nix_mc_addr_list_free(dev, nb_mc_addr);
- return rc;
-}
-
-void
-otx2_nix_mc_filter_init(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_INIT(&dev->mc_fltr_tbl);
-}
-
-void
-otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev)
-{
- struct mcast_entry *entry;
- uint32_t count = 0;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- count++;
-
- nix_mc_addr_list_free(dev, count);
-}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
deleted file mode 100644
index abb2130587..0000000000
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ /dev/null
@@ -1,450 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <ethdev_driver.h>
-
-#include "otx2_ethdev.h"
-
-#define PTP_FREQ_ADJUST (1 << 9)
-
-/* Function to enable ptp config for VFs */
-void
-otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (otx2_nix_recalc_mtu(eth_dev))
- otx2_err("Failed to set MTU size for ptp");
-
- dev->scalar_ena = true;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
-}
-
-static uint16_t
-nix_eth_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- struct otx2_eth_rxq *rxq = queue;
- struct rte_eth_dev *eth_dev;
-
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- eth_dev = rxq->eth_dev;
- otx2_nix_ptp_enable_vf(eth_dev);
-
- return 0;
-}
-
-static int
-nix_read_raw_clock(struct otx2_eth_dev *dev, uint64_t *clock, uint64_t *tsc,
- uint8_t is_pmu)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- req->is_pmu = is_pmu;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto fail;
-
- if (clock)
- *clock = rsp->clk;
- if (tsc)
- *tsc = rsp->tsc;
-
-fail:
- return rc;
-}
-
-/* This function calculates two parameters, "clk_freq_mult" and
- * "clk_delta", which are useful in deriving the PTP HI clock from
- * the timestamp counter (tsc) value.
- */
-int
-otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev)
-{
- uint64_t ticks_base = 0, ticks = 0, tsc = 0, t_freq;
- int rc, val;
-
- /* Calculating the frequency at which PTP HI clock is running */
- rc = nix_read_raw_clock(dev, &ticks_base, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- rte_delay_ms(100);
-
- rc = nix_read_raw_clock(dev, &ticks, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- t_freq = (ticks - ticks_base) * 10;
-
- /* Calculating the freq multiplier viz the ratio between the
- * frequency at which PTP HI clock works and tsc clock runs
- */
- dev->clk_freq_mult =
- (double)pow(10, floor(log10(t_freq))) / rte_get_timer_hz();
-
- val = false;
-#ifdef RTE_ARM_EAL_RDTSC_USE_PMU
- val = true;
-#endif
- rc = nix_read_raw_clock(dev, &ticks, &tsc, val);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- /* Calculating delta between PTP HI clock and tsc */
- dev->clk_delta = ((uint64_t)(ticks / dev->clk_freq_mult) - tsc);
-
-fail:
- return rc;
-}
-
-static void
-nix_start_timecounters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
-
- dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
-}
-
-static int
-nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t rc = -EINVAL;
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return rc;
-
- if (en) {
- /* Enable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
- return rc;
- }
- /* Enable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
- } else {
- /* Disable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
- return rc;
- }
- /* Disable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
- }
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_dev *eth_dev;
- int i;
-
- if (!dev)
- return -EINVAL;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return -EINVAL;
-
- otx2_dev->ptp_en = ptp_en;
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
- rxq->mbuf_initializer =
- otx2_nix_rxq_mbuf_setup(otx2_dev,
- eth_dev->data->port_id);
- }
- if (otx2_dev_is_vf(otx2_dev) && !(otx2_dev_is_sdp(otx2_dev)) &&
- !(otx2_dev_is_lbk(otx2_dev))) {
-		/* In case of VF, setting of MTU can't be done directly in
-		 * this function as this is running as part of an MBOX
-		 * request (PF->VF) and MTU setting also requires an MBOX
-		 * message to be sent (VF->PF)
-		 */
- eth_dev->rx_pkt_burst = nix_eth_ptp_vf_burst;
- rte_mb();
- }
-
- return 0;
-}
-
-int
-otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
-	/* If we are VF/SDP/LBK, ptp cannot be enabled */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev)) {
- otx2_info("PTP cannot be enabled in case of VF/SDP/LBK");
- return -EINVAL;
- }
-
- if (otx2_ethdev_is_ptp_en(dev)) {
- otx2_info("PTP mode is already enabled");
- return -EINVAL;
- }
-
- if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
- otx2_err("Ptype offload is disabled, it should be enabled");
- return -EINVAL;
- }
-
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- return -EINVAL;
- }
-
-	/* Allocating an iova address for tx tstamp */
- const struct rte_memzone *ts;
- ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
- 0, OTX2_ALIGN, OTX2_ALIGN,
- dev->node);
- if (ts == NULL) {
- otx2_err("Failed to allocate mem for tx tstamp addr");
- return -ENOMEM;
- }
-
- dev->tstamp.tx_tstamp_iova = ts->iova;
- dev->tstamp.tx_tstamp = ts->addr;
-
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
-
- /* System time should be already on by default */
- nix_start_timecounters(eth_dev);
-
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 1);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- if (!otx2_ethdev_is_ptp_en(dev)) {
- otx2_nix_dbg("PTP mode is disabled");
- return -EINVAL;
- }
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return -EINVAL;
-
- dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 0);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t __rte_unused flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (!tstamp->rx_ready)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
- tstamp->rx_ready = 0;
-
- otx2_nix_dbg("rx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- (uint64_t)tstamp->rx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (*tstamp->tx_tstamp == 0)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("tx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- *tstamp->tx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- *tstamp->tx_tstamp = 0;
- rte_wmb();
-
- return 0;
-}
-
-int
-otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
-	/* Adjust the frequency so that ticks increment at 10^9 ticks per sec */
- if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_ADJFINE;
- req->scaled_ppm = delta;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- /* Since the frequency of PTP comp register is tuned, delta and
- * freq mult calculation for deriving PTP_HI from timestamp
- * counter should be done again.
- */
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc)
- otx2_err("Failed to calculate delta and freq mult");
- }
- dev->systime_tc.nsec += delta;
- dev->rx_tstamp_tc.nsec += delta;
- dev->tx_tstamp_tc.nsec += delta;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t ns;
-
- ns = rte_timespec_to_ns(ts);
- /* Set the time counters to a new value. */
- dev->systime_tc.nsec = ns;
- dev->rx_tstamp_tc.nsec = ns;
- dev->tx_tstamp_tc.nsec = ns;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- uint64_t ns;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
- *ts = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("PTP time read: %"PRIu64" .%09"PRIu64"",
- (uint64_t)ts->tv_sec, (uint64_t)ts->tv_nsec);
-
- return 0;
-}
-
-
-int
-otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
-	/* This API returns the raw PTP HI clock value. LFs don't have
-	 * direct access to PTP registers and a mbox msg to AF is required
-	 * to get this value. Reading it for every packet in the fastpath
-	 * (which involves a mbox call) would be very expensive, hence we
-	 * derive the PTP HI clock value from the tsc using freq_mult and
-	 * clk_delta calculated during the configure stage.
-	 */
- *clock = (rte_get_tsc_cycles() + dev->clk_delta) * dev->clk_freq_mult;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
deleted file mode 100644
index 68cef1caa3..0000000000
--- a/drivers/net/octeontx2/otx2_rss.c
+++ /dev/null
@@ -1,427 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
- uint8_t group, uint16_t *ind_tbl)
-{
- struct otx2_rss_info *rss = &dev->rss_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *req;
- int rc, idx;
-
- for (idx = 0; idx < rss->rss_size; idx++) {
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_INIT;
-
- if (!dev->lock_rx_ctx)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_LOCK;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
- int idx = 0;
-
- rc = -EINVAL;
- if (reta_size != dev->rss_info.rss_size) {
-		otx2_err("Size of hash lookup table configured "
-		"(%d) doesn't match the number the hardware can support "
-		"(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask >> j) & 0x01)
- rss->ind_tbl[idx] = reta_conf[i].reta[j];
- idx++;
- }
- }
-
- return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
-
- rc = -EINVAL;
-
- if (reta_size != dev->rss_info.rss_size) {
-		otx2_err("Size of hash lookup table configured "
-		"(%d) doesn't match the number the hardware can support "
-		"(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
- if ((reta_conf[i].mask >> j) & 0x01)
- reta_conf[i].reta[j] = rss->ind_tbl[j];
- }
-
- return 0;
-
-fail:
- return rc;
-}
-
-void
-otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
- uint32_t key_len)
-{
- const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
- };
- struct otx2_rss_info *rss = &dev->rss_info;
- uint64_t *keyptr;
- uint64_t val;
- uint32_t idx;
-
- if (key == NULL || key == 0) {
- keyptr = (uint64_t *)(uintptr_t)default_key;
- key_len = NIX_HASH_KEY_SIZE;
- memset(rss->key, 0, key_len);
- } else {
- memcpy(rss->key, key, key_len);
- keyptr = (uint64_t *)rss->key;
- }
-
- for (idx = 0; idx < (key_len >> 3); idx++) {
- val = rte_cpu_to_be_64(*keyptr);
- otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
- keyptr++;
- }
-}
-
-static void
-rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
-{
- uint64_t *keyptr = (uint64_t *)key;
- uint64_t val;
- int idx;
-
- for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
- val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
- *keyptr = rte_be_to_cpu_64(val);
- keyptr++;
- }
-}
-
-#define RSS_IPV4_ENABLE ( \
- RTE_ETH_RSS_IPV4 | \
- RTE_ETH_RSS_FRAG_IPV4 | \
- RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-
-#define RSS_IPV6_ENABLE ( \
- RTE_ETH_RSS_IPV6 | \
- RTE_ETH_RSS_FRAG_IPV6 | \
- RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define RSS_IPV6_EX_ENABLE ( \
- RTE_ETH_RSS_IPV6_EX | \
- RTE_ETH_RSS_IPV6_TCP_EX | \
- RTE_ETH_RSS_IPV6_UDP_EX)
-
-#define RSS_MAX_LEVELS 3
-
-#define RSS_IPV4_INDEX 0
-#define RSS_IPV6_INDEX 1
-#define RSS_TCP_INDEX 2
-#define RSS_UDP_INDEX 3
-#define RSS_SCTP_INDEX 4
-#define RSS_DMAC_INDEX 5
-
-uint32_t
-otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
- uint8_t rss_level)
-{
- uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
- {
- FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
- FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
- FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
- FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
- FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
- FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
- }
- };
- uint32_t flowkey_cfg = 0;
-
- dev->rss_info.nix_rss = ethdev_rss;
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
- }
-
- if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
-
- if (ethdev_rss & RSS_IPV4_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
-
- if (ethdev_rss & RSS_IPV6_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_TCP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_UDP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_SCTP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
- flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
-
- if (ethdev_rss & RSS_IPV6_EX_ENABLE)
- flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
-
- if (ethdev_rss & RTE_ETH_RSS_PORT)
- flowkey_cfg |= FLOW_KEY_TYPE_PORT;
-
- if (ethdev_rss & RTE_ETH_RSS_NVGRE)
- flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
-
- if (ethdev_rss & RTE_ETH_RSS_VXLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_GENEVE)
- flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
-
- if (ethdev_rss & RTE_ETH_RSS_GTPU)
- flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
-
- return flowkey_cfg;
-}
-
-int
-otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
- uint8_t *alg_idx, uint8_t group, int mcam_index)
-{
- struct nix_rss_flowkey_cfg_rsp *rss_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rss_flowkey_cfg *cfg;
- int rc;
-
- rc = -EINVAL;
-
- dev->rss_info.flowkey_cfg = flowkey_cfg;
-
- cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
-
- cfg->flowkey_cfg = flowkey_cfg;
- cfg->mcam_index = mcam_index; /* -1 indicates default group */
- cfg->group = group; /* 0 is default group */
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
- if (rc)
- return rc;
-
- if (alg_idx)
- *alg_idx = rss_rsp->alg_idx;
-
- return rc;
-}
-
-int
-otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint8_t alg_idx;
- int rc;
-
- rc = -EINVAL;
-
- if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
- otx2_err("Hash key size mismatch %d vs %d",
- rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
- goto fail;
- }
-
- if (rss_conf->rss_key)
- otx2_nix_rss_set_key(dev, rss_conf->rss_key,
- (uint32_t)rss_conf->rss_key_len);
-
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg =
- otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (rss_conf->rss_key)
- rss_get_key(dev, rss_conf->rss_key);
-
- rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
- rss_conf->rss_hf = dev->rss_info.nix_rss;
-
- return 0;
-}
-
-int
-otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint64_t rss_hf;
- uint8_t alg_idx;
- int rc;
-
- /* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
- return 0;
-
- /* Update default RSS key and cfg */
- otx2_nix_rss_set_key(dev, NULL, 0);
-
- /* Update default RSS RETA */
- for (idx = 0; idx < dev->rss_info.rss_size; idx++)
- dev->rss_info.ind_tbl[idx] = idx % qcnt;
-
- /* Init RSS table context */
- rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
- if (rc) {
- otx2_err("Failed to init RSS table rc=%d", rc);
- return rc;
- }
-
- rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
deleted file mode 100644
index 5ee1aed786..0000000000
--- a/drivers/net/octeontx2/otx2_rx.c
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_rx.h"
-
-#define NIX_DESCS_PER_LOOP 4
-#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
-#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
-
-static inline uint16_t
-nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
- const uint16_t pkts, const uint32_t qmask)
-{
- uint32_t available = rxq->available;
-
- /* Update the available count if cached value is not enough */
- if (unlikely(available < pkts)) {
- uint64_t reg, head, tail;
-
- /* Use LDADDA version to avoid reorder */
- reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
- /* CQ_OP_STATUS operation error */
- if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
- reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
- return 0;
-
- tail = reg & 0xFFFFF;
- head = (reg >> 20) & 0xFFFFF;
- if (tail < head)
- available = tail - head + qmask + 1;
- else
- available = tail - head;
-
- rxq->available = available;
- }
-
- return RTE_MIN(pkts, available);
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- const uint64_t mbuf_init = rxq->mbuf_initializer;
- const void *lookup_mem = rxq->lookup_mem;
- const uint64_t data_off = rxq->data_off;
- const uintptr_t desc = rxq->desc;
- const uint64_t wdata = rxq->wdata;
- const uint32_t qmask = rxq->qmask;
- uint16_t packets = 0, nb_pkts;
- uint32_t head = rxq->head;
- struct nix_cqe_hdr_s *cq;
- struct rte_mbuf *mbuf;
-
- nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
-
- while (packets < nb_pkts) {
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(desc +
- (CQE_SZ((head + 2) & qmask))));
- cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
-
- mbuf = nix_get_mbuf_from_cqe(cq, data_off);
-
- otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
- flags);
- otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags,
- (uint64_t *)((uint8_t *)mbuf + data_off));
- rx_pkts[packets++] = mbuf;
- otx2_prefetch_store_keep(mbuf);
- head++;
- head &= qmask;
- }
-
- rxq->head = head;
- rxq->available -= nb_pkts;
-
- /* Free all the CQs that we've processed */
- otx2_write64((wdata | nb_pkts), rxq->cq_door);
-
- return nb_pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-static __rte_always_inline uint64_t
-nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
-{
- if (w2 & BIT_ULL(21) /* vtag0_gone */) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint64_t
-nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
-{
- if (w2 & BIT_ULL(23) /* vtag1_gone */) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0;
- uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
- const uint64_t mbuf_initializer = rxq->mbuf_initializer;
- const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
- uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
- uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
- struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- const uint16_t *lookup_mem = rxq->lookup_mem;
- const uint32_t qmask = rxq->qmask;
- const uint64_t wdata = rxq->wdata;
- const uintptr_t desc = rxq->desc;
- uint8x16_t f0, f1, f2, f3;
- uint32_t head = rxq->head;
- uint16_t pkts_left;
-
- pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
-
-	/* Packets have to be floor-aligned to NIX_DESCS_PER_LOOP */
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- while (packets < pkts) {
- /* Exit loop if head is about to wrap and become unaligned */
- if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
- NIX_DESCS_PER_LOOP) {
- pkts_left += (pkts - packets);
- break;
- }
-
- const uintptr_t cq0 = desc + CQE_SZ(head);
-
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
-
- /* Get NIX_RX_SG_S for size and buffer pointer */
- cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
- cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
- cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
- cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
-
- /* Extract mbuf from NIX_RX_SG_S */
- mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
- mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
- mbuf01 = vqsubq_u64(mbuf01, data_off);
- mbuf23 = vqsubq_u64(mbuf23, data_off);
-
- /* Move mbufs to scalar registers for future use */
- mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
- mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
- mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
- mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
-
- /* Mask to get packet len from NIX_RX_SG_S */
- const uint8x16_t shuf_msk = {
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0, 1, /* octet 1~0, low 16 bits pkt_len */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 0, 1, /* octet 1~0, 16 bits data_len */
- 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF
- };
-
- /* Form the rx_descriptor_fields1 with pkt_len and data_len */
- f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
- f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
- f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
- f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
-
- /* Load CQE word0 and word 1 */
- uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
- uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
- uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
- uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
- uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
- uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
- uint64_t cq3_w0 = ((uint64_t *)(cq0 + CQE_SZ(3)))[0];
- uint64_t cq3_w1 = ((uint64_t *)(cq0 + CQE_SZ(3)))[1];
-
- if (flags & NIX_RX_OFFLOAD_RSS_F) {
- /* Fill rss in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(cq0_w0, f0, 3);
- f1 = vsetq_lane_u32(cq1_w0, f1, 3);
- f2 = vsetq_lane_u32(cq2_w0, f2, 3);
- f3 = vsetq_lane_u32(cq3_w0, f3, 3);
- ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
- } else {
- ol_flags0 = 0; ol_flags1 = 0;
- ol_flags2 = 0; ol_flags3 = 0;
- }
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
- /* Fill packet_type in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq0_w1),
- f0, 0);
- f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq1_w1),
- f1, 0);
- f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq2_w1),
- f2, 0);
- f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq3_w1),
- f3, 0);
- }
-
- if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
- ol_flags0 |= nix_rx_olflags_get(lookup_mem, cq0_w1);
- ol_flags1 |= nix_rx_olflags_get(lookup_mem, cq1_w1);
- ol_flags2 |= nix_rx_olflags_get(lookup_mem, cq2_w1);
- ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
- }
-
- if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
- uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
- uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
- uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
-
- ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
- ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
- ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
- ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
-
- ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
- ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
- ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
- ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
- }
-
- if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
- ol_flags0 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
- ol_flags1 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
- ol_flags2 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
- ol_flags3 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
- }
-
- /* Form rearm_data with ol_flags */
- rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
- rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
- rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
- rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
-
- /* Update rx_descriptor_fields1 */
- vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
- vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
- vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
- vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
-
- /* Update rearm_data */
- vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
- vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
- vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
- vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
-
- /* Update that no more segments */
- mbuf0->next = NULL;
- mbuf1->next = NULL;
- mbuf2->next = NULL;
- mbuf3->next = NULL;
-
- /* Store the mbufs to rx_pkts */
- vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
- vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
-
- /* Prefetch mbufs */
- otx2_prefetch_store_keep(mbuf0);
- otx2_prefetch_store_keep(mbuf1);
- otx2_prefetch_store_keep(mbuf2);
- otx2_prefetch_store_keep(mbuf3);
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
-
- /* Advance head pointer and packets */
- head += NIX_DESCS_PER_LOOP; head &= qmask;
- packets += NIX_DESCS_PER_LOOP;
- }
-
- rxq->head = head;
- rxq->available -= packets;
-
- rte_io_wmb();
- /* Free all the CQs that we've processed */
- otx2_write64((rxq->wdata | packets), rxq->cq_door);
-
- if (unlikely(pkts_left))
- packets += nix_recv_pkts(rx_queue, &rx_pkts[packets],
- pkts_left, flags);
-
- return packets;
-}
-
-#else
-
-static inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- RTE_SET_USED(rx_queue);
- RTE_SET_USED(rx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(flags);
-
- return 0;
-}
-
-#endif
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
- (flags) | NIX_RX_MULTI_SEG_F); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- /* TSTMP is not supported by vector */ \
- if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
- return 0; \
- return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
-} \
-
-NIX_RX_FASTPATH_MODES
-#undef R
-
-static inline void
-pick_rx_func(struct rte_eth_dev *eth_dev,
- const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
- eth_dev->rx_pkt_burst = rx_burst
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
-}
-
-void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- /* For PTP enabled, scalar rx function should be chosen as most of the
- * PTP apps are implemented to rx burst 1 pkt.
- */
- if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- pick_rx_func(eth_dev, nix_eth_rx_burst);
- else
- pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
-
- /* Copy multi seg version with no offload for tear down sequence */
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- dev->rx_pkt_burst_no_offload =
- nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
deleted file mode 100644
index 98406244e2..0000000000
--- a/drivers/net/octeontx2/otx2_rx.h
+++ /dev/null
@@ -1,583 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RX_H__
-#define __OTX2_RX_H__
-
-#include <rte_ether.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_fp.h"
-
-/* Default mark value used when none is provided. */
-#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
-
-#define PTYPE_NON_TUNNEL_WIDTH 16
-#define PTYPE_TUNNEL_WIDTH 12
-#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_NON_TUNNEL_WIDTH)
-#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_TUNNEL_WIDTH)
-#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
- PTYPE_TUNNEL_ARRAY_SZ) *\
- sizeof(uint16_t))
-
-#define NIX_RX_OFFLOAD_NONE (0)
-#define NIX_RX_OFFLOAD_RSS_F BIT(0)
-#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
-#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
-#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
-#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
-#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
-#define NIX_RX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control cqe_to_mbuf conversion function.
- * Defining it from backwards to denote its been
- * not used as offload flags to pick function
- */
-#define NIX_RX_MULTI_SEG_F BIT(15)
-#define NIX_TIMESYNC_RX_OFFSET 8
-
-/* Inline IPsec offsets */
-
-/* nix_cqe_hdr_s + nix_rx_parse_s + nix_rx_sg_s + nix_iova_s */
-#define INLINE_CPT_RESULT_OFFSET 80
-
-struct otx2_timesync_info {
- uint64_t rx_tstamp;
- rte_iova_t tx_tstamp_iova;
- uint64_t *tx_tstamp;
- uint64_t rx_tstamp_dynflag;
- int tstamp_dynfield_offset;
- uint8_t tx_ready;
- uint8_t rx_ready;
-} __rte_cache_aligned;
-
-union mbuf_initializer {
- struct {
- uint16_t data_off;
- uint16_t refcnt;
- uint16_t nb_segs;
- uint16_t port;
- } fields;
- uint64_t value;
-};
-
-static inline rte_mbuf_timestamp_t *
-otx2_timestamp_dynfield(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *info)
-{
- return RTE_MBUF_DYNFIELD(mbuf,
- info->tstamp_dynfield_offset, rte_mbuf_timestamp_t *);
-}
-
-static __rte_always_inline void
-otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *tstamp, const uint16_t flag,
- uint64_t *tstamp_ptr)
-{
- if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
- (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
- NIX_TIMESYNC_RX_OFFSET)) {
-
- mbuf->pkt_len -= NIX_TIMESYNC_RX_OFFSET;
-
- /* Reading the rx timestamp inserted by CGX, viz at
- * starting of the packet data.
- */
- *otx2_timestamp_dynfield(mbuf, tstamp) =
- rte_be_to_cpu_64(*tstamp_ptr);
- /* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
- * PTP packets are received.
- */
- if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
- tstamp->rx_tstamp =
- *otx2_timestamp_dynfield(mbuf, tstamp);
- tstamp->rx_ready = 1;
- mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
- RTE_MBUF_F_RX_IEEE1588_TMST |
- tstamp->rx_tstamp_dynflag;
- }
- }
-}
-
-static __rte_always_inline uint64_t
-nix_clear_data_off(uint64_t oldval)
-{
- union mbuf_initializer mbuf_init = { .value = oldval };
-
- mbuf_init.fields.data_off = 0;
- return mbuf_init.value;
-}
-
-static __rte_always_inline struct rte_mbuf *
-nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
-{
- rte_iova_t buff;
-
- /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
- buff = *((rte_iova_t *)((uint64_t *)cq + 9));
- return (struct rte_mbuf *)(buff - data_off);
-}
-
-
-static __rte_always_inline uint32_t
-nix_ptype_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint16_t * const ptype = lookup_mem;
- const uint16_t lh_lg_lf = (in & 0xFFF0000000000000) >> 52;
- const uint16_t tu_l2 = ptype[(in & 0x000FFFF000000000) >> 36];
- const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lh_lg_lf];
-
- return (il4_tu << PTYPE_NON_TUNNEL_WIDTH) | tu_l2;
-}
-
-static __rte_always_inline uint32_t
-nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint32_t * const ol_flags = (const uint32_t *)
- ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
-
- return ol_flags[(in & 0xfff00000) >> 20];
-}
-
-static inline uint64_t
-nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
- struct rte_mbuf *mbuf)
-{
- /* There is no separate bit to check match_id
- * is valid or not? and no flag to identify it is an
- * RTE_FLOW_ACTION_TYPE_FLAG vs RTE_FLOW_ACTION_TYPE_MARK
- * action. The former case addressed through 0 being invalid
- * value and inc/dec match_id pair when MARK is activated.
- * The later case addressed through defining
- * OTX2_FLOW_MARK_DEFAULT as value for
- * RTE_FLOW_ACTION_TYPE_MARK.
- * This would translate to not use
- * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
- * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id.
- * i.e valid mark_id's are from
- * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
- */
- if (likely(match_id)) {
- ol_flags |= RTE_MBUF_F_RX_FDIR;
- if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
- mbuf->hash.fdir.hi = match_id - 1;
- }
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline void
-nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
- struct rte_mbuf *mbuf, uint64_t rearm)
-{
- const rte_iova_t *iova_list;
- struct rte_mbuf *head;
- const rte_iova_t *eol;
- uint8_t nb_segs;
- uint64_t sg;
-
- sg = *(const uint64_t *)(rx + 1);
- nb_segs = (sg >> 48) & 0x3;
- mbuf->nb_segs = nb_segs;
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
-
- eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
- /* Skip SG_S and first IOVA*/
- iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
- nb_segs--;
-
- rearm = rearm & ~0xFFFF;
-
- head = mbuf;
- while (nb_segs) {
- mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
- mbuf = mbuf->next;
-
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
- *(uint64_t *)(&mbuf->rearm_data) = rearm;
- nb_segs--;
- iova_list++;
-
- if (!nb_segs && (iova_list + 1 < eol)) {
- sg = *(const uint64_t *)(iova_list);
- nb_segs = (sg >> 48) & 0x3;
- head->nb_segs += nb_segs;
- iova_list = (const rte_iova_t *)(iova_list + 1);
- }
- }
- mbuf->next = NULL;
-}
-
-static __rte_always_inline uint16_t
-nix_rx_sec_cptres_get(const void *cq)
-{
- volatile const struct otx2_cpt_res *res;
-
- res = (volatile const struct otx2_cpt_res *)((const char *)cq +
- INLINE_CPT_RESULT_OFFSET);
-
- return res->u16[0];
-}
-
-static __rte_always_inline void *
-nix_rx_sec_sa_get(const void * const lookup_mem, int spi, uint16_t port)
-{
- const uint64_t *const *sa_tbl = (const uint64_t * const *)
- ((const uint8_t *)lookup_mem + OTX2_NIX_SA_TBL_START);
-
- return (void *)sa_tbl[port][spi];
-}
-
-static __rte_always_inline uint64_t
-nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
- const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
- const void * const lookup_mem)
-{
- uint8_t *l2_ptr, *l3_ptr, *l2_ptr_actual, *l3_ptr_actual;
- struct otx2_ipsec_fp_in_sa *sa;
- uint16_t m_len, l2_len, ip_len;
- struct rte_ipv6_hdr *ip6h;
- struct rte_ipv4_hdr *iph;
- uint16_t *ether_type;
- uint32_t spi;
- int i;
-
- if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
-
- /* 20 bits of tag would have the SPI */
- spi = cq->tag & 0xFFFFF;
-
- sa = nix_rx_sec_sa_get(lookup_mem, spi, m->port);
- *rte_security_dynfield(m) = sa->udata64;
-
- l2_ptr = rte_pktmbuf_mtod(m, uint8_t *);
- l2_len = rx->lcptr - rx->laptr;
- l3_ptr = RTE_PTR_ADD(l2_ptr, l2_len);
-
- if (sa->replay_win_sz) {
- if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
- }
-
- l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
- l3_ptr_actual = RTE_PTR_ADD(l3_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
-
- for (i = l2_len - RTE_ETHER_TYPE_LEN - 1; i >= 0; i--)
- l2_ptr_actual[i] = l2_ptr[i];
-
- m->data_off += sizeof(struct otx2_ipsec_fp_res_hdr);
-
- ether_type = RTE_PTR_SUB(l3_ptr_actual, RTE_ETHER_TYPE_LEN);
-
- iph = (struct rte_ipv4_hdr *)l3_ptr_actual;
- if ((iph->version_ihl >> 4) == 4) {
- ip_len = rte_be_to_cpu_16(iph->total_length);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- } else {
- ip6h = (struct rte_ipv6_hdr *)iph;
- ip_len = rte_be_to_cpu_16(ip6h->payload_len);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- }
-
- m_len = ip_len + l2_len;
- m->data_len = m_len;
- m->pkt_len = m_len;
- return RTE_MBUF_F_RX_SEC_OFFLOAD;
-}
-
-static __rte_always_inline void
-otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
- struct rte_mbuf *mbuf, const void *lookup_mem,
- const uint64_t val, const uint16_t flag)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
- const uint64_t w1 = *(const uint64_t *)rx;
- const uint16_t len = rx->pkt_lenm1 + 1;
- uint64_t ol_flags = 0;
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- if (flag & NIX_RX_OFFLOAD_PTYPE_F)
- mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
- else
- mbuf->packet_type = 0;
-
- if (flag & NIX_RX_OFFLOAD_RSS_F) {
- mbuf->hash.rss = tag;
- ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- }
-
- if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
- ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
-
- if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- if (rx->vtag0_gone) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- mbuf->vlan_tci = rx->vtag0_tci;
- }
- if (rx->vtag1_gone) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = rx->vtag1_tci;
- }
- }
-
- if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
- ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
-
- if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
- cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
- *(uint64_t *)(&mbuf->rearm_data) = val;
- ol_flags |= nix_rx_sec_mbuf_update(rx, cq, mbuf, lookup_mem);
- mbuf->ol_flags = ol_flags;
- return;
- }
-
- mbuf->ol_flags = ol_flags;
- *(uint64_t *)(&mbuf->rearm_data) = val;
- mbuf->pkt_len = len;
-
- if (flag & NIX_RX_MULTI_SEG_F) {
- nix_cqe_xtract_mseg(rx, mbuf, val);
- } else {
- mbuf->data_len = len;
- mbuf->next = NULL;
- }
-}
-
-#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
-#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
-#define RSS_F NIX_RX_OFFLOAD_RSS_F
-#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
-#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
-#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
-#define RX_SEC_F NIX_RX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
-#define NIX_RX_FASTPATH_MODES \
-R(no_offload, 0, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
-R(rss, 0, 0, 0, 0, 0, 0, 1, RSS_F) \
-R(ptype, 0, 0, 0, 0, 0, 1, 0, PTYPE_F) \
-R(ptype_rss, 0, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
-R(cksum, 0, 0, 0, 0, 1, 0, 0, CKSUM_F) \
-R(cksum_rss, 0, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
-R(cksum_ptype, 0, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
-R(cksum_ptype_rss, 0, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
-R(vlan, 0, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
-R(vlan_rss, 0, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
-R(vlan_ptype, 0, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
-R(vlan_ptype_rss, 0, 0, 0, 1, 0, 1, 1, \
- RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum, 0, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
-R(vlan_cksum_rss, 0, 0, 0, 1, 1, 0, 1, \
- RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype, 0, 0, 0, 1, 1, 1, 0, \
- RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(vlan_cksum_ptype_rss, 0, 0, 0, 1, 1, 1, 1, \
- RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark, 0, 0, 1, 0, 0, 0, 0, MARK_F) \
-R(mark_rss, 0, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
-R(mark_ptype, 0, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
-R(mark_ptype_rss, 0, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F) \
-R(mark_cksum, 0, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
-R(mark_cksum_rss, 0, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F) \
-R(mark_cksum_ptype, 0, 0, 1, 0, 1, 1, 0, \
- MARK_F | CKSUM_F | PTYPE_F) \
-R(mark_cksum_ptype_rss, 0, 0, 1, 0, 1, 1, 1, \
- MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark_vlan, 0, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
-R(mark_vlan_rss, 0, 0, 1, 1, 0, 0, 1, \
- MARK_F | RX_VLAN_F | RSS_F) \
-R(mark_vlan_ptype, 0, 0, 1, 1, 0, 1, 0, \
- MARK_F | RX_VLAN_F | PTYPE_F) \
-R(mark_vlan_ptype_rss, 0, 0, 1, 1, 0, 1, 1, \
- MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(mark_vlan_cksum, 0, 0, 1, 1, 1, 0, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F) \
-R(mark_vlan_cksum_rss, 0, 0, 1, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(mark_vlan_cksum_ptype, 0, 0, 1, 1, 1, 1, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(mark_vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts, 0, 1, 0, 0, 0, 0, 0, TS_F) \
-R(ts_rss, 0, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
-R(ts_ptype, 0, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
-R(ts_ptype_rss, 0, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F) \
-R(ts_cksum, 0, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
-R(ts_cksum_rss, 0, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F) \
-R(ts_cksum_ptype, 0, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F) \
-R(ts_cksum_ptype_rss, 0, 1, 0, 0, 1, 1, 1, \
- TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_vlan, 0, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
-R(ts_vlan_rss, 0, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F) \
-R(ts_vlan_ptype, 0, 1, 0, 1, 0, 1, 0, \
- TS_F | RX_VLAN_F | PTYPE_F) \
-R(ts_vlan_ptype_rss, 0, 1, 0, 1, 0, 1, 1, \
- TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_vlan_cksum, 0, 1, 0, 1, 1, 0, 0, \
- TS_F | RX_VLAN_F | CKSUM_F) \
-R(ts_vlan_cksum_rss, 0, 1, 0, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(ts_vlan_cksum_ptype, 0, 1, 0, 1, 1, 1, 0, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_vlan_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, 1, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark, 0, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
-R(ts_mark_rss, 0, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F) \
-R(ts_mark_ptype, 0, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F) \
-R(ts_mark_ptype_rss, 0, 1, 1, 0, 0, 1, 1, \
- TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(ts_mark_cksum, 0, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F) \
-R(ts_mark_cksum_rss, 0, 1, 1, 0, 1, 0, 1, \
- TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(ts_mark_cksum_ptype, 0, 1, 1, 0, 1, 1, 0, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_cksum_ptype_rss, 0, 1, 1, 0, 1, 1, 1, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan, 0, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
-R(ts_mark_vlan_rss, 0, 1, 1, 1, 0, 0, 1, \
- TS_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(ts_mark_vlan_ptype, 0, 1, 1, 1, 0, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(ts_mark_vlan_ptype_rss, 0, 1, 1, 1, 0, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec, 1, 0, 0, 0, 0, 0, 0, RX_SEC_F) \
-R(sec_rss, 1, 0, 0, 0, 0, 0, 1, RX_SEC_F | RSS_F) \
-R(sec_ptype, 1, 0, 0, 0, 0, 1, 0, RX_SEC_F | PTYPE_F) \
-R(sec_ptype_rss, 1, 0, 0, 0, 0, 1, 1, \
- RX_SEC_F | PTYPE_F | RSS_F) \
-R(sec_cksum, 1, 0, 0, 0, 1, 0, 0, RX_SEC_F | CKSUM_F) \
-R(sec_cksum_rss, 1, 0, 0, 0, 1, 0, 1, \
- RX_SEC_F | CKSUM_F | RSS_F) \
-R(sec_cksum_ptype, 1, 0, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_cksum_ptype_rss, 1, 0, 0, 0, 1, 1, 1, \
- RX_SEC_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_vlan, 1, 0, 0, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_vlan_rss, 1, 0, 0, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_vlan_ptype, 1, 0, 0, 1, 0, 1, 0, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F) \
-R(sec_vlan_ptype_rss, 1, 0, 0, 1, 0, 1, 1, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_vlan_cksum, 1, 0, 0, 1, 1, 0, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F) \
-R(sec_vlan_cksum_rss, 1, 0, 0, 1, 1, 0, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_vlan_cksum_ptype, 1, 0, 0, 1, 1, 1, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_vlan_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark, 1, 0, 1, 0, 0, 0, 0, RX_SEC_F | MARK_F) \
-R(sec_mark_rss, 1, 0, 1, 0, 0, 0, 1, RX_SEC_F | MARK_F | RSS_F)\
-R(sec_mark_ptype, 1, 0, 1, 0, 0, 1, 0, \
- RX_SEC_F | MARK_F | PTYPE_F) \
-R(sec_mark_ptype_rss, 1, 0, 1, 0, 0, 1, 1, \
- RX_SEC_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_mark_cksum, 1, 0, 1, 0, 1, 0, 0, \
- RX_SEC_F | MARK_F | CKSUM_F) \
-R(sec_mark_cksum_rss, 1, 0, 1, 0, 1, 0, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_mark_cksum_ptype, 1, 0, 1, 0, 1, 1, 0, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_cksum_ptype_rss, 1, 0, 1, 0, 1, 1, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan, 1, 0, 1, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_mark_vlan_rss, 1, 0, 1, 1, 0, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(sec_mark_vlan_ptype, 1, 0, 1, 1, 0, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_mark_vlan_ptype_rss, 1, 0, 1, 1, 0, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan_cksum, 1, 0, 1, 1, 1, 0, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_mark_vlan_cksum_rss, 1, 0, 1, 1, 1, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_mark_vlan_cksum_ptype, 1, 0, 1, 1, 1, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_vlan_cksum_ptype_rss, \
- 1, 0, 1, 1, 1, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts, 1, 1, 0, 0, 0, 0, 0, RX_SEC_F | TS_F) \
-R(sec_ts_rss, 1, 1, 0, 0, 0, 0, 1, RX_SEC_F | TS_F | RSS_F) \
-R(sec_ts_ptype, 1, 1, 0, 0, 0, 1, 0, RX_SEC_F | TS_F | PTYPE_F)\
-R(sec_ts_ptype_rss, 1, 1, 0, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | PTYPE_F | RSS_F) \
-R(sec_ts_cksum, 1, 1, 0, 0, 1, 0, 0, RX_SEC_F | TS_F | CKSUM_F)\
-R(sec_ts_cksum_rss, 1, 1, 0, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | CKSUM_F | RSS_F) \
-R(sec_ts_cksum_ptype, 1, 1, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_cksum_ptype_rss, 1, 1, 0, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan, 1, 1, 0, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F) \
-R(sec_ts_vlan_rss, 1, 1, 0, 1, 0, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_vlan_ptype, 1, 1, 0, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_vlan_ptype_rss, 1, 1, 0, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan_cksum, 1, 1, 0, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_vlan_cksum_rss, 1, 1, 0, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_ts_vlan_cksum_ptype, 1, 1, 0, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_vlan_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts_mark, 1, 1, 1, 0, 0, 0, 0, RX_SEC_F | TS_F | MARK_F) \
-R(sec_ts_mark_rss, 1, 1, 1, 0, 0, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RSS_F) \
-R(sec_ts_mark_ptype, 1, 1, 1, 0, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F) \
-R(sec_ts_mark_ptype_rss, 1, 1, 1, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_cksum, 1, 1, 1, 0, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F) \
-R(sec_ts_mark_cksum_rss, 1, 1, 1, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_ts_mark_cksum_ptype, 1, 1, 1, 0, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_mark_cksum_ptype_rss, 1, 1, 1, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_vlan, 1, 1, 1, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F) \
-R(sec_ts_mark_vlan_rss, 1, 1, 1, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_mark_vlan_ptype, 1, 1, 1, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_mark_vlan_ptype_rss, 1, 1, 1, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum, 1, 1, 1, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F) \
-R(sec_ts_mark_vlan_cksum_ptype_rss, \
- 1, 1, 1, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F | RSS_F)
-#endif /* __OTX2_RX_H__ */
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
deleted file mode 100644
index 3adf21608c..0000000000
--- a/drivers/net/octeontx2/otx2_stats.c
+++ /dev/null
@@ -1,397 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include "otx2_ethdev.h"
-
-struct otx2_nix_xstats_name {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- uint32_t offset;
-};
-
-static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
- {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
- {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
- {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
- {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
- {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
-};
-
-static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
- {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
- {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
- {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
- {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
- {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
- {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
- {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
- {"rx_err", NIX_STAT_LF_RX_RX_ERR},
- {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
- {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
- {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
- {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
-};
-
-static const struct otx2_nix_xstats_name nix_q_xstats[] = {
- {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
-};
-
-#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
-#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
-#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
-
-#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
- OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
-
-int
-otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t reg, val;
- uint32_t qidx, i;
- int64_t *addr;
-
- stats->opackets = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
- stats->oerrors = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
- stats->obytes = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
-
- stats->ipackets = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
- stats->imissed = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
- stats->ibytes = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
- stats->ierrors = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->txmap[i] & (1U << 31)) {
- qidx = dev->txmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_opackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_obytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] = val;
- }
- }
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->rxmap[i] & (1U << 31)) {
- qidx = dev->rxmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ipackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ibytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] += val;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- uint8_t stat_idx, uint8_t is_rx)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (is_rx)
- dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
- else
- dev->txmap[stat_idx] = ((1U << 31) | queue_id);
-
- return 0;
-}
-
-int
-otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- unsigned int i, count = 0;
- uint64_t reg, val;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (xstats == NULL)
- return 0;
-
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- reg = (((uint64_t)i) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
- nix_q_xstats[0].offset));
- if (val & OP_ERR)
- val = 0;
- xstats[count].value += val;
- }
- xstats[count].id = count;
- count++;
-
- return count;
-}
-
-int
-otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- unsigned int i, count = 0;
-
- RTE_SET_USED(eth_dev);
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
- return -ENOMEM;
-
- if (xstats_names) {
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_tx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_rx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_q_xstats[i].name);
- count++;
- }
- }
-
- return OTX2_NIX_NUM_XSTATS_REG;
-}
-
-int
-otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (limit > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (xstats_names == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
- sizeof(xstats_names[i].name));
- }
-
- return limit;
-}
-
-int
-otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (n > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (values == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get(eth_dev, xstats, n);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- values[i] = xstats[ids[i]].value;
- }
-
- return n;
-}
-
-static int
-nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint32_t i;
- int rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read rq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
- aq->rq.octs = 0;
- aq->rq.pkts = 0;
- aq->rq.drop_octs = 0;
- aq->rq.drop_pkts = 0;
- aq->rq.re_pkts = 0;
-
- aq->rq_mask.octs = ~(aq->rq_mask.octs);
- aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
- aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
- aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
- aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write rq context");
- return rc;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read sq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
- otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
- aq->sq.octs = 0;
- aq->sq.pkts = 0;
- aq->sq.drop_octs = 0;
- aq->sq.drop_pkts = 0;
-
- aq->sq_mask.octs = ~(aq->sq_mask.octs);
- aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
- aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
- aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write sq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int ret;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- ret = otx2_mbox_process(mbox);
- if (ret != 0)
- return ret;
-
- /* Reset queue stats */
- return nix_queue_stats_reset(eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
deleted file mode 100644
index 6aff1f9587..0000000000
--- a/drivers/net/octeontx2/otx2_tm.c
+++ /dev/null
@@ -1,3317 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_tm.h"
-
-/* Use last LVL_CNT nodes as default nodes */
-#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
-
-enum otx2_tm_node_level {
- OTX2_TM_LVL_ROOT = 0,
- OTX2_TM_LVL_SCH1,
- OTX2_TM_LVL_SCH2,
- OTX2_TM_LVL_SCH3,
- OTX2_TM_LVL_SCH4,
- OTX2_TM_LVL_QUEUE,
- OTX2_TM_LVL_MAX,
-};
-
-static inline
-uint64_t shaper2regval(struct shaper_params *shaper)
-{
- return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
- (shaper->div_exp << 13) | (shaper->exponent << 9) |
- (shaper->mantissa << 1);
-}
-
-int
-otx2_nix_get_link(struct otx2_eth_dev *dev)
-{
- int link = 13 /* SDP */;
- uint16_t lmac_chan;
- uint16_t map;
-
- lmac_chan = dev->tx_chan_base;
-
- /* CGX lmac link */
- if (lmac_chan >= 0x800) {
- map = lmac_chan & 0x7FF;
- link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
- } else if (lmac_chan < 0x700) {
- /* LBK channel */
- link = 12;
- }
-
- return link;
-}
-
-static uint8_t
-nix_get_relchan(struct otx2_eth_dev *dev)
-{
- return dev->tx_chan_base & 0xff;
-}
-
-static bool
-nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
-{
- bool is_lbk = otx2_dev_is_lbk(dev);
- return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
-}
-
-static bool
-nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
-{
- if (nix_tm_have_tl1_access(dev))
- return (lvl == OTX2_TM_LVL_QUEUE);
-
- return (lvl == OTX2_TM_LVL_SCH4);
-}
-
-static int
-find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
-{
- struct otx2_nix_tm_node *child_node;
-
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (!child_node->parent)
- continue;
- if (!(child_node->parent->id == node_id))
- continue;
- if (child_node->priority == child_node->parent->rr_prio)
- continue;
- return child_node->hw_id - child_node->priority;
- }
- return 0;
-}
-
-
-static struct otx2_nix_tm_shaper_profile *
-nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
-{
- struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
-
- TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
- if (tm_shaper_profile->shaper_profile_id == shaper_id)
- return tm_shaper_profile;
- }
- return NULL;
-}
-
-static inline uint64_t
-shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p, uint64_t *div_exp_p)
-{
- uint64_t div_exp, exponent, mantissa;
-
- /* Boundary checks */
- if (value < MIN_SHAPER_RATE ||
- value > MAX_SHAPER_RATE)
- return 0;
-
- if (value <= SHAPER_RATE(0, 0, 0)) {
- /* Calculate rate div_exp and mantissa using
- * the following formula:
- *
- * value = (2E6 * (256 + mantissa)
- * / ((1 << div_exp) * 256))
- */
- div_exp = 0;
- exponent = 0;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
- div_exp += 1;
-
- while (value <
- ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
- ((1 << div_exp) * 256)))
- mantissa -= 1;
- } else {
- /* Calculate rate exponent and mantissa using
- * the following formula:
- *
- * value = (2E6 * ((256 + mantissa) << exponent)) / 256
- *
- */
- div_exp = 0;
- exponent = MAX_RATE_EXPONENT;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
- exponent -= 1;
-
- while (value < ((NIX_SHAPER_RATE_CONST *
- ((256 + mantissa) << exponent)) / 256))
- mantissa -= 1;
- }
-
- if (div_exp > MAX_RATE_DIV_EXP ||
- exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
- return 0;
-
- if (div_exp_p)
- *div_exp_p = div_exp;
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- /* Calculate real rate value */
- return SHAPER_RATE(exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p)
-{
- uint64_t exponent, mantissa;
-
- if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
- return 0;
-
- /* Calculate burst exponent and mantissa using
- * the following formula:
- *
- * value = (((256 + mantissa) << (exponent + 1)
- / 256)
- *
- */
- exponent = MAX_BURST_EXPONENT;
- mantissa = MAX_BURST_MANTISSA;
-
- while (value < (1ull << (exponent + 1)))
- exponent -= 1;
-
- while (value < ((256 + mantissa) << (exponent + 1)) / 256)
- mantissa -= 1;
-
- if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
- return 0;
-
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- return SHAPER_BURST(exponent, mantissa);
-}
-
-static void
-shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
- struct shaper_params *cir,
- struct shaper_params *pir)
-{
- struct rte_tm_shaper_params *param = &profile->params;
-
- if (!profile)
- return;
-
- /* Calculate CIR exponent and mantissa */
- if (param->committed.rate)
- cir->rate = shaper_rate_to_nix(param->committed.rate,
- &cir->exponent,
- &cir->mantissa,
- &cir->div_exp);
-
- /* Calculate PIR exponent and mantissa */
- if (param->peak.rate)
- pir->rate = shaper_rate_to_nix(param->peak.rate,
- &pir->exponent,
- &pir->mantissa,
- &pir->div_exp);
-
- /* Calculate CIR burst exponent and mantissa */
- if (param->committed.size)
- cir->burst = shaper_burst_to_nix(param->committed.size,
- &cir->burst_exponent,
- &cir->burst_mantissa);
-
- /* Calculate PIR burst exponent and mantissa */
- if (param->peak.size)
- pir->burst = shaper_burst_to_nix(param->peak.size,
- &pir->burst_exponent,
- &pir->burst_mantissa);
-}
-
-static void
-shaper_default_red_algo(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile)
-{
- struct shaper_params cir, pir;
-
- /* C0 doesn't support STALL when both PIR & CIR are enabled */
- if (profile && otx2_dev_is_96xx_Cx(dev)) {
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- if (pir.rate && cir.rate) {
- tm_node->red_algo = NIX_REDALG_DISCARD;
- tm_node->flags |= NIX_TM_NODE_RED_DISCARD;
- return;
- }
- }
-
- tm_node->red_algo = NIX_REDALG_STD;
- tm_node->flags &= ~NIX_TM_NODE_RED_DISCARD;
-}
-
-static int
-populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
-
- /*
- * Default config for TL1.
- * For VF this is always ignored.
- */
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_TL1;
-
- /* Set DWRR quantum */
- req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
- req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
- req->num_regs++;
-
- req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
- req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
- req->num_regs++;
-
- req->reg[2] = NIX_AF_TL1X_CIR(schq);
- req->regval[2] = 0;
- req->num_regs++;
-
- return otx2_mbox_process(mbox);
-}
-
-static uint8_t
-prepare_tm_sched_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint64_t strict_prio = tm_node->priority;
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint64_t rr_quantum;
- uint8_t k = 0;
-
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- /* For children to root, strict prio is default if either
- * device root is TL2 or TL1 Static Priority is disabled.
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- dev->tm_flags & NIX_TM_TL1_NO_SP))
- strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
-
- otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
- "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, strict_prio, rr_quantum, tm_node);
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- regval[k] = rr_quantum;
- k++;
-
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- struct shaper_params cir, pir;
- uint32_t schq = tm_node->hw_id;
- uint64_t adjust = 0;
- uint8_t k = 0;
-
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- /* Packet length adjust */
- if (tm_node->pkt_mode)
- adjust = 1;
- else if (profile)
- adjust = profile->params.pkt_length_adjust & 0x1FF;
-
- otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
- "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
- "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
- adjust, tm_node->pkt_mode, tm_node);
-
- switch (tm_node->hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_MDQX_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED ALG */
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL4X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL3X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL2X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- /* Configure CIR */
- reg[k] = NIX_AF_TL1X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure length disable and adjust */
- reg[k] = NIX_AF_TL1X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint8_t k = 0;
-
- otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
- tm_node->id, enable, tm_node);
-
- regval[k] = enable;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- k++;
- break;
- default:
- break;
- }
-
- return k;
-}
-
-static int
-populate_tm_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
- uint64_t regval[MAX_REGS_PER_MBOX_MSG];
- uint64_t reg[MAX_REGS_PER_MBOX_MSG];
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t parent = 0, child = 0;
- uint32_t hw_lvl, rr_prio, schq;
- struct nix_txschq_config *req;
- int rc = -EFAULT;
- uint8_t k = 0;
-
- memset(regval_mask, 0, sizeof(regval_mask));
- profile = nix_tm_shaper_profile_search(dev,
- tm_node->params.shaper_profile_id);
- rr_prio = tm_node->rr_prio;
- hw_lvl = tm_node->hw_lvl;
- schq = tm_node->hw_id;
-
- /* Root node will not have a parent node */
- if (hw_lvl == dev->otx2_tm_root_lvl)
- parent = tm_node->parent_hw_id;
- else
- parent = tm_node->parent->hw_id;
-
- /* Do we need this trigger to configure TL1 */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- hw_lvl == dev->otx2_tm_root_lvl) {
- rc = populate_tm_tl1_default(dev, parent);
- if (rc)
- goto error;
- }
-
- if (hw_lvl != NIX_TXSCH_LVL_SMQ)
- child = find_prio_anchor(dev, tm_node->id);
-
- /* Override default rr_prio when TL1
- * Static Priority is disabled
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- dev->tm_flags & NIX_TM_TL1_NO_SP) {
- rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
- child = 0;
- }
-
- otx2_tm_dbg("Topology config node %s(%u)->%s(%"PRIu64") lvl %u, id %u"
- " prio_anchor %"PRIu64" rr_prio %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, nix_hwlvl2str(hw_lvl + 1),
- parent, tm_node->lvl, tm_node->id, child, rr_prio, tm_node);
-
- /* Prepare Topology and Link config */
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
-
- /* Set xoff which will be cleared later and minimum length
- * which will be used for zero padding if packet length is
- * smaller
- */
- reg[k] = NIX_AF_SMQX_CFG(schq);
- regval[k] = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
- NIX_MIN_HW_FRS;
- regval_mask[k] = ~(BIT_ULL(50) | (0x7ULL << 36) | 0x7f);
- k++;
-
- /* Parent and schedule conf */
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Configure TL4 to send to SDP channel instead of CGX/LBK */
- if (otx2_dev_is_sdp(dev)) {
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- regval[k] = BIT_ULL(12);
- k++;
- }
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
- k++;
-
- break;
- }
-
- /* Prepare schedule config */
-	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
-
- /* Prepare shaping config */
-	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
-
- if (!k)
- return 0;
-
- /* Copy and send config mbox */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = hw_lvl;
- req->num_regs = k;
-
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- goto error;
-
- return 0;
-error:
- otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
- return rc;
-}
-
-
-static int
-nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t hw_lvl;
- int rc = 0;
-
- for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl == hw_lvl &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
- rc = populate_tm_reg(dev, tm_node);
- if (rc)
- goto exit;
- }
- }
- }
-exit:
- return rc;
-}
-
-static struct otx2_nix_tm_node *
-nix_tm_node_search(struct otx2_eth_dev *dev,
- uint32_t node_id, bool user)
-{
- struct otx2_nix_tm_node *tm_node;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->id == node_id &&
- (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
- return tm_node;
- }
- return NULL;
-}
-
-static uint32_t
-check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->parent->id == parent_id))
- continue;
-
- if (tm_node->priority == priority)
- rr_num++;
- }
- return rr_num;
-}
-
-static int
-nix_tm_update_parent_info(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node_child;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_nix_tm_node *parent;
- uint32_t rr_num = 0;
- uint32_t priority;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
- /* Count group of children of same priority i.e are RR */
- parent = tm_node->parent;
- priority = tm_node->priority;
- rr_num = check_rr(dev, priority, parent->id);
-
- /* Assuming that multiple RR groups are
- * not configured based on capability.
- */
- if (rr_num > 1) {
- parent->rr_prio = priority;
- parent->rr_num = rr_num;
- }
-
- /* Find out static priority children that are not in RR */
- TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
- if (!tm_node_child->parent)
- continue;
- if (parent->id != tm_node_child->parent->id)
- continue;
- if (parent->max_prio == UINT32_MAX &&
- tm_node_child->priority != parent->rr_prio)
- parent->max_prio = 0;
-
- if (parent->max_prio < tm_node_child->priority &&
- parent->rr_prio != tm_node_child->priority)
- parent->max_prio = tm_node_child->priority;
- }
- }
-
- return 0;
-}
-
-static int
-nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint16_t hw_lvl,
- uint16_t lvl, bool user,
- struct rte_tm_node_params *params)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *parent_node;
- uint32_t profile_id;
-
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- parent_node = nix_tm_node_search(dev, parent_node_id, user);
-
- tm_node = rte_zmalloc("otx2_nix_tm_node",
- sizeof(struct otx2_nix_tm_node), 0);
- if (!tm_node)
- return -ENOMEM;
-
- tm_node->lvl = lvl;
- tm_node->hw_lvl = hw_lvl;
-
- /* Maintain minimum weight */
- if (!weight)
- weight = 1;
-
- tm_node->id = node_id;
- tm_node->priority = priority;
- tm_node->weight = weight;
- tm_node->rr_prio = 0xf;
- tm_node->max_prio = UINT32_MAX;
- tm_node->hw_id = UINT32_MAX;
- tm_node->flags = 0;
- if (user)
- tm_node->flags = NIX_TM_NODE_USER;
-
- /* Packet mode */
- if (!nix_tm_is_leaf(dev, lvl) &&
- ((profile && profile->params.packet_mode) ||
- (params->nonleaf.wfq_weight_mode &&
- params->nonleaf.n_sp_priorities &&
- !params->nonleaf.wfq_weight_mode[0])))
- tm_node->pkt_mode = 1;
-
- rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
-
- if (profile)
- profile->reference_count++;
-
- tm_node->parent = parent_node;
- tm_node->parent_hw_id = UINT32_MAX;
- shaper_default_red_algo(dev, tm_node, profile);
-
- TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
-
- return 0;
-}
-
-static int
-nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *shaper_profile;
-
- while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
- if (shaper_profile->reference_count)
- otx2_tm_dbg("Shaper profile %u has non zero references",
- shaper_profile->shaper_profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
- rte_free(shaper_profile);
- }
-
- return 0;
-}
-
-static int
-nix_clear_path_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct nix_txschq_config *req;
- struct otx2_nix_tm_node *p;
- int rc;
-
- /* Manipulating SW_XOFF not supported on Ax */
- if (otx2_dev_is_Ax(dev))
- return 0;
-
- /* Enable nodes in path for flush to succeed */
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- p = tm_node;
- else
- p = tm_node->parent;
- while (p) {
- if (!(p->flags & NIX_TM_NODE_ENABLED) &&
- (p->flags & NIX_TM_NODE_HWRES)) {
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = p->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
- req->regval);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
-
- p->flags |= NIX_TM_NODE_ENABLED;
- }
- p = p->parent;
- }
-
- return 0;
-}
-
-static int
-nix_smq_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
- uint16_t smq;
- int rc;
-
- smq = tm_node->hw_id;
- otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
- enable ? "enable" : "disable");
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_SMQ;
- req->num_regs = 1;
-
- req->reg[0] = NIX_AF_SMQX_CFG(smq);
- req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
- req->regval_mask[0] = enable ?
- ~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
-{
- struct otx2_eth_txq *txq = __txq;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint64_t aura_handle;
- int rc;
-
- otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
- enable ? "enable" : "disable");
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return -EFAULT;
- mbox = lf->mbox;
- /* Set/clear sqb aura fc_ena */
- aura_handle = txq->sqb_pool->pool_id;
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
- /* Below is not needed for aura writes but AF driver needs it */
- /* AF will translate to associated poolctx */
- req->aura.pool_addr = req->aura_id;
-
- req->aura.fc_ena = enable;
- req->aura_mask.fc_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Read back npa aura ctx */
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Init when enabled as there might be no triggers */
- if (enable)
- *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
- else
- *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
- /* Sync write barrier */
- rte_wmb();
-
- return 0;
-}
-
-static int
-nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
-{
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_eth_dev *dev = txq->dev;
- uint64_t wdata, val, prev;
- uint16_t sq = txq->sq;
- int64_t *regaddr;
- uint64_t timeout;/* 10's of usec */
-
- /* Wait for enough time based on shaper min rate */
- timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
- timeout = timeout / dev->tm_rate_min;
- if (!timeout)
- timeout = 10000;
-
- wdata = ((uint64_t)sq << 32);
- regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
-
- /* Spin multiple iterations as "txq->fc_cache_pkts" can still
- * have space to send pkts even though fc_mem is disabled
- */
-
- while (true) {
- prev = val;
- rte_delay_us(10);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
- /* Continue on error */
- if (val & BIT_ULL(63))
- continue;
-
- if (prev != val)
- continue;
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- /* SQ reached quiescent state */
- if (sqb_cnt <= 1 && head_off == tail_off &&
- (*txq->fc_mem == txq->nb_sqb_bufs)) {
- break;
- }
-
- /* Timeout */
- if (!timeout)
- goto exit;
- timeout--;
- }
-
- return 0;
-exit:
- otx2_nix_tm_dump(dev);
- return -EFAULT;
-}
-
-/* Flush and disable tx queue and its parent SMQ */
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq;
- struct otx2_eth_dev *dev;
- uint16_t sq;
- bool user;
- int rc;
-
- txq = _txq;
- dev = txq->dev;
- sq = txq->sq;
-
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
- otx2_err("Invalid node/state for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable CGX RXTX to drain pkts */
- if (!dev_started) {
- /* Though it enables both RX MCAM Entries and CGX Link
- * we assume all the rx queues are stopped way back.
- */
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc) {
- otx2_err("cgx start failed, rc=%d", rc);
- return rc;
- }
- }
-
- /* Disable smq xoff for case it was enabled earlier */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
-
- /* As per HRM, to disable an SQ, all other SQ's
- * that feed to same SMQ must be paused before SMQ flush.
- */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- sq = sibling->id;
- txq = dev->eth_dev->data->tx_queues[sq];
- if (!txq)
- continue;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
-			otx2_err("Failed to drain sq %u, rc=%d", txq->sq, rc);
- return rc;
- }
- }
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Disable and flush */
- rc = nix_smq_xoff(dev, tm_node->parent, true);
- if (rc) {
- otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- goto cleanup;
- }
-cleanup:
- /* Restore cgx state */
- if (!dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-int otx2_nix_sq_flush_post(void *_txq)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq = _txq;
- struct otx2_eth_txq *s_txq;
- struct otx2_eth_dev *dev;
- bool once = false;
- uint16_t sq, s_sq;
- bool user;
- int rc;
-
- dev = txq->dev;
- sq = txq->sq;
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node) {
- otx2_err("Invalid node for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable all the siblings back */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
-
- if (sibling->id == sq)
- continue;
-
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- s_sq = sibling->id;
- s_txq = dev->eth_dev->data->tx_queues[s_sq];
- if (!s_txq)
- continue;
-
- if (!once) {
- /* Enable back if any SQ is still present */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
- once = true;
- }
-
- rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
- }
-
- return 0;
-}
-
-static int
-nix_sq_sched_data(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool rr_quantum_only)
-{
- struct rte_eth_dev *eth_dev = dev->eth_dev;
- struct otx2_mbox *mbox = dev->mbox;
- uint16_t sq = tm_node->id, smq;
- struct nix_aq_enq_req *req;
- uint64_t rr_quantum;
- int rc;
-
- smq = tm_node->parent->hw_id;
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- if (rr_quantum_only)
- otx2_tm_dbg("Update sq(%u) rr_quantum 0x%"PRIx64, sq, rr_quantum);
- else
- otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%"PRIx64,
- sq, smq, rr_quantum);
-
- if (sq > eth_dev->data->nb_tx_queues)
- return -EFAULT;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- req->qidx = sq;
- req->ctype = NIX_AQ_CTYPE_SQ;
- req->op = NIX_AQ_INSTOP_WRITE;
-
- /* smq update only when needed */
- if (!rr_quantum_only) {
- req->sq.smq = smq;
- req->sq_mask.smq = ~req->sq_mask.smq;
- }
- req->sq.smq_rr_quantum = rr_quantum;
- req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set smq, rc=%d", rc);
- return rc;
-}
-
-int otx2_nix_sq_enable(void *_txq)
-{
- struct otx2_eth_txq *txq = _txq;
- int rc;
-
- /* Enable sqb_aura fc */
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
- uint32_t flags, bool hw_only)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *next_node;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_free_req *req;
- uint32_t profile_id;
- int rc = 0;
-
- next_node = TAILQ_FIRST(&dev->node_list);
- while (next_node) {
- tm_node = next_node;
- next_node = TAILQ_NEXT(tm_node, node);
-
- /* Check for only requested nodes */
- if ((tm_node->flags & flags_mask) != flags)
- continue;
-
- if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
- tm_node->flags & NIX_TM_NODE_HWRES) {
- /* Free specific HW resource */
- otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
- nix_hwlvl2str(tm_node->hw_lvl),
- tm_node->hw_id, tm_node->lvl,
- tm_node->id, tm_node);
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = 0;
- req->schq_lvl = tm_node->hw_lvl;
- req->schq = tm_node->hw_id;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- tm_node->flags &= ~NIX_TM_NODE_HWRES;
- }
-
- /* Leave software elements if needed */
- if (hw_only)
- continue;
-
- otx2_tm_dbg("Free node lvl %u id %u (%p)",
- tm_node->lvl, tm_node->id, tm_node);
-
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile)
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- }
-
- if (!flags_mask) {
- /* Free all hw resources */
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = TXSCHQ_FREE_ALL;
-
- return otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static uint8_t
-nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_rsp *rsp)
-{
- uint16_t schq;
- uint8_t lvl;
-
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
- for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
- dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
- dev->txschq_contig_list[lvl][schq] =
- rsp->schq_contig_list[lvl][schq];
- }
-
- dev->txschq[lvl] = rsp->schq[lvl];
- dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
- }
- return 0;
-}
-
-static int
-nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *child,
- struct otx2_nix_tm_node *parent)
-{
- uint32_t hw_id, schq_con_index, prio_offset;
- uint32_t l_id, schq_index;
-
- otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
- nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
-
- child->flags |= NIX_TM_NODE_HWRES;
-
- /* Process root nodes */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- int idx = 0;
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_contig_index[l_id];
- hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_contig_index[l_id]++;
- /* Update TL1 hw_id for its parent for config purpose */
- idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
- hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
- child->parent_hw_id = hw_id;
- return 0;
- }
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_index[l_id];
- hw_id = dev->txschq_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_index[l_id]++;
- return 0;
- }
-
- /* Process children with parents */
- l_id = child->hw_lvl;
- schq_index = dev->txschq_index[l_id];
- schq_con_index = dev->txschq_contig_index[l_id];
-
- if (child->priority == parent->rr_prio) {
- hw_id = dev->txschq_list[l_id][schq_index];
- child->hw_id = hw_id;
- child->parent_hw_id = parent->hw_id;
- dev->txschq_index[l_id]++;
- } else {
- prio_offset = schq_con_index + child->priority;
- hw_id = dev->txschq_contig_list[l_id][prio_offset];
- child->hw_id = hw_id;
- }
- return 0;
-}
-
-static int
-nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *parent, *child;
- uint32_t child_hw_lvl, con_index_inc, i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
- TAILQ_FOREACH(parent, &dev->node_list, node) {
- child_hw_lvl = parent->hw_lvl - 1;
- if (parent->hw_lvl != i)
- continue;
- TAILQ_FOREACH(child, &dev->node_list, node) {
- if (!child->parent)
- continue;
- if (child->parent->id != parent->id)
- continue;
- nix_tm_assign_id_to_node(dev, child, parent);
- }
-
- con_index_inc = parent->max_prio + 1;
- dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
-
- /*
- * Explicitly assign id to parent node if it
- * doesn't have a parent
- */
- if (parent->hw_lvl == dev->otx2_tm_root_lvl)
- nix_tm_assign_id_to_node(dev, parent, NULL);
- }
- }
- return 0;
-}
-
-static uint8_t
-nix_tm_count_req_schq(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req, uint8_t lvl)
-{
- struct otx2_nix_tm_node *tm_node;
- uint8_t contig_count;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (lvl == tm_node->hw_lvl) {
- req->schq[lvl - 1] += tm_node->rr_num;
- if (tm_node->max_prio != UINT32_MAX) {
- contig_count = tm_node->max_prio + 1;
- req->schq_contig[lvl - 1] += contig_count;
- }
- }
- if (lvl == dev->otx2_tm_root_lvl &&
- dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
- tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
- req->schq_contig[dev->otx2_tm_root_lvl]++;
- }
- }
-
- req->schq[NIX_TXSCH_LVL_TL1] = 1;
- req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req)
-{
- uint8_t i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
- nix_tm_count_req_schq(dev, req, i);
-
- for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
- dev->txschq_index[i] = 0;
- dev->txschq_contig_index[i] = 0;
- }
- return 0;
-}
-
-static int
-nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_alloc_req *req;
- struct nix_txsch_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
-
- rc = nix_tm_prepare_txschq_req(dev, req);
- if (rc)
- return rc;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- nix_tm_copy_rsp_to_dev(dev, rsp);
- dev->link_cfg_lvl = rsp->link_cfg_lvl;
-
- nix_tm_assign_hw_id(dev);
- return 0;
-}
-
-static int
-nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint16_t sq;
- int rc;
-
- nix_tm_update_parent_info(dev);
-
- rc = nix_tm_send_txsch_alloc_msg(dev);
- if (rc) {
- otx2_err("TM failed to alloc tm resources=%d", rc);
- return rc;
- }
-
- rc = nix_tm_txsch_reg_config(dev);
- if (rc) {
- otx2_err("TM failed to configure sched registers=%d", rc);
- return rc;
- }
-
-	/* Trigger MTU recalculation as SMQ needs the MTU config */
- if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc) {
- otx2_err("TM MTU update failed, rc=%d", rc);
- return rc;
- }
- }
-
-	/* Mark all non-leaf nodes as enabled */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- if (!xmit_enable)
- return 0;
-
- /* Update SQ Sched Data while SQ is idle */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- rc = nix_sq_sched_data(dev, tm_node, false);
- if (rc) {
- otx2_err("SQ %u sched update failed, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- }
-
- /* Finally XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- return rc;
- }
- }
-
- /* Enable xmit as all the topology is ready */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- sq = tm_node->id;
- txq = eth_dev->data->tx_queues[sq];
-
- rc = otx2_nix_sq_enable(txq);
- if (rc) {
- otx2_err("TM sw xon failed on SQ %u, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox,
- struct nix_txschq_config *req,
- struct rte_tm_error *error)
-{
- int rc;
-
- if (!req->num_regs ||
- req->num_regs > MAX_REGS_PER_MBOX_MSG) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "invalid config";
- return -EIO;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- }
- return rc;
-}
-
-static uint16_t
-nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
-{
- if (nix_tm_have_tl1_access(dev)) {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL1;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH4:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- } else {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- }
-}
-
-static uint16_t
-nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
-{
- if (hw_lvl >= NIX_TXSCH_LVL_CNT)
- return 0;
-
- /* MDQ doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- return 0;
-
- /* PF's TL1 with VF's enabled doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- (dev->tm_flags & NIX_TM_TL1_NO_SP)))
- return 0;
-
- return TXSCH_TLX_SP_PRIO_MAX - 1;
-}
-
-
-static int
-validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
- uint32_t parent_id, uint32_t priority,
- struct rte_tm_error *error)
-{
- uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
- int i;
-
- /* Validate priority against max */
- if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "unsupported priority value";
- return -EINVAL;
- }
-
- if (parent_id == RTE_TM_NODE_ID_NULL)
- return 0;
-
- memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
- priorities[priority] = 1;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->flags & NIX_TM_NODE_USER))
- continue;
-
- if (tm_node->parent->id != parent_id)
- continue;
-
- priorities[tm_node->priority]++;
- }
-
- for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
- if (priorities[i] > 1)
- rr_num++;
-
-	/* At most one RR group per parent */
- if (rr_num > 1) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "multiple DWRR node priority";
- return -EINVAL;
- }
-
- /* Check for previous priority to avoid holes in priorities */
- if (priority && !priorities[priority - 1]) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority not in order";
- return -EINVAL;
- }
-
- return 0;
-}
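The sibling rules enforced by validate_prio() above boil down to two checks: at most one priority may be shared by multiple siblings (a single DWRR group), and the used priorities must be contiguous from zero with no holes. A standalone sketch of those rules (the function name and fixed bound are ours, standing in for TXSCH_TLX_SP_PRIO_MAX):

```c
#include <stdbool.h>
#include <stdint.h>

#define SP_PRIO_MAX 10	/* illustrative bound, not the driver's value */

/* counts[i] is how many sibling nodes use priority i. */
static bool prio_layout_valid(const uint8_t counts[SP_PRIO_MAX])
{
	bool hole_seen = false;
	int rr_groups = 0;
	int i;

	for (i = 0; i < SP_PRIO_MAX; i++) {
		if (counts[i] > 1)
			rr_groups++;		/* shared priority => DWRR group */
		if (counts[i] == 0)
			hole_seen = true;
		else if (hole_seen)
			return false;		/* used priority after a hole */
	}
	return rr_groups <= 1;			/* at most one DWRR group */
}
```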
-
-static int
-read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
- uint64_t *regval, uint32_t hw_lvl)
-{
- volatile struct nix_txschq_config *req;
- struct nix_txschq_config *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->read = 1;
- req->lvl = hw_lvl;
- req->reg[0] = reg;
- req->num_regs = 1;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc)
- return rc;
- *regval = rsp->regval[0];
- return 0;
-}
-
-/* Search for min rate in topology */
-static void
-nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t rate_min = 1E9; /* 1 Gbps */
-
- TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
- if (profile->params.peak.rate &&
- profile->params.peak.rate < rate_min)
- rate_min = profile->params.peak.rate;
-
- if (profile->params.committed.rate &&
- profile->params.committed.rate < rate_min)
- rate_min = profile->params.committed.rate;
- }
-
- dev->tm_rate_min = rate_min;
-}
-
-static int
-nix_xmit_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint64_t wdata, val;
- int i, rc;
-
- otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
-
- /* Enable CGX RXTX to drain pkts */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
- }
-
- /* XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Flush all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
-			otx2_err("Failed to drain sq, rc=%d", rc);
- goto cleanup;
- }
- }
-
-	/* XOFF & flush all SMQ's. HRM mandates that all SQ's
-	 * be empty before an SMQ flush is issued.
-	 */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, true);
- if (rc) {
-			otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Verify sanity of all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- wdata = ((uint64_t)txq->sq << 32);
- val = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- if (sqb_cnt > 1 || head_off != tail_off ||
- (*txq->fc_mem != txq->nb_sqb_bufs))
- otx2_err("Failed to gracefully flush sq %u", txq->sq);
- }
-
-cleanup:
- /* restore cgx state */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-static int
-otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- int *is_leaf, struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
-
- if (is_leaf == NULL) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- return -EINVAL;
- }
- if (nix_tm_is_leaf(dev, tm_node->lvl))
- *is_leaf = true;
- else
- *is_leaf = false;
- return 0;
-}
-
-static int
-otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
- struct rte_tm_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int rc, max_nr_nodes = 0, i;
- struct free_rsrcs_rsp *rsp;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
- max_nr_nodes += rsp->schq[i];
-
- cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
- /* TL1 level is reserved for PF */
- cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
- OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
- cap->non_leaf_nodes_identical = 1;
- cap->leaf_nodes_identical = 1;
-
- /* Shaper Capabilities */
- cap->shaper_private_n_max = max_nr_nodes;
- cap->shaper_n_max = max_nr_nodes;
- cap->shaper_private_dual_rate_n_max = max_nr_nodes;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
- cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
- cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
-
- /* Schedule Capabilities */
- cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
- cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
- cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
- cap->sched_wfq_n_groups_max = 1;
- cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->sched_wfq_packet_mode_supported = 1;
- cap->sched_wfq_byte_mode_supported = 1;
-
- cap->dynamic_update_mask =
- RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
- RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
- cap->stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES |
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- for (i = 0; i < RTE_COLORS; i++) {
- cap->mark_vlan_dei_supported[i] = false;
- cap->mark_ip_ecn_tcp_supported[i] = false;
- cap->mark_ip_dscp_supported[i] = false;
- }
-
- return 0;
-}
-
-static int
-otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
- struct rte_tm_level_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct free_rsrcs_rsp *rsp;
- uint16_t hw_lvl;
- int rc;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
-
- if (nix_tm_is_leaf(dev, lvl)) {
- /* Leaf */
- cap->n_nodes_max = dev->tm_leaf_cnt;
- cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
- cap->leaf_nodes_identical = 1;
- cap->leaf.stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
-
- } else if (lvl == OTX2_TM_LVL_ROOT) {
- /* Root node, aka TL2(vf)/TL1(pf) */
- cap->n_nodes_max = 1;
- cap->n_nodes_nonleaf_max = 1;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
-		cap->nonleaf.shaper_private_dual_rate_supported =
-			!nix_tm_have_tl1_access(dev);
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (nix_tm_have_tl1_access(dev))
- cap->nonleaf.stats_mask =
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- } else if ((lvl < OTX2_TM_LVL_MAX) &&
- (hw_lvl < NIX_TXSCH_LVL_CNT)) {
- /* TL2, TL3, TL4, MDQ */
- cap->n_nodes_max = rsp->schq[hw_lvl];
- cap->n_nodes_nonleaf_max = cap->n_nodes_max;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported = true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- /* MDQ doesn't support Strict Priority */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max =
- rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
- } else {
-		/* Unsupported level */
-		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		return -EINVAL;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct free_rsrcs_rsp *rsp;
- int rc, hw_lvl, lvl;
-
- memset(cap, 0, sizeof(*cap));
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- hw_lvl = tm_node->hw_lvl;
- lvl = tm_node->lvl;
-
- /* Leaf node */
- if (nix_tm_is_leaf(dev, lvl)) {
- cap->stats_mask = RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
- return 0;
- }
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- /* Non Leaf Shaper */
- cap->shaper_private_supported = true;
-	cap->shaper_private_dual_rate_supported =
-		(hw_lvl != NIX_TXSCH_LVL_TL1);
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
-
- /* Non Leaf Scheduler */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
-
- cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_children_per_group_max =
- cap->nonleaf.sched_n_children_max;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (hw_lvl == NIX_TXSCH_LVL_TL1)
- cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_shaper_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile;
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID already exists";
- return -EINVAL;
- }
-
- /* Committed rate and burst size can be enabled/disabled */
- if (params->committed.size || params->committed.rate) {
- if (params->committed.size < MIN_SHAPER_BURST ||
- params->committed.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->committed.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper committed rate invalid";
- return -EINVAL;
- }
- }
-
- /* Peak rate and burst size can be enabled/disabled */
- if (params->peak.size || params->peak.rate) {
- if (params->peak.size < MIN_SHAPER_BURST ||
- params->peak.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->peak.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper peak rate invalid";
- return -EINVAL;
- }
- }
-
- if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
- params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
- error->message = "length adjust invalid";
- return -EINVAL;
- }
-
- profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
- sizeof(struct otx2_nix_tm_shaper_profile), 0);
- if (!profile)
- return -ENOMEM;
-
- profile->shaper_profile_id = profile_id;
- rte_memcpy(&profile->params, params,
- sizeof(struct rte_tm_shaper_params));
- TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
-
- otx2_tm_dbg("Added TM shaper profile %u, "
- " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
- ", cbs %" PRIu64 " , adj %u, pkt mode %d",
- profile_id,
- params->peak.rate * 8,
- params->peak.size,
- params->committed.rate * 8,
- params->committed.size,
- params->pkt_length_adjust,
- params->packet_mode);
-
-	/* Translate rates to bits per second */
- profile->params.peak.rate = profile->params.peak.rate * 8;
- profile->params.committed.rate = profile->params.committed.rate * 8;
- /* Always use PIR for single rate shaping */
- if (!params->peak.rate && params->committed.rate) {
- profile->params.peak = profile->params.committed;
- memset(&profile->params.committed, 0,
- sizeof(profile->params.committed));
- }
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
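The tail of otx2_nix_tm_shaper_profile_add() above performs two normalizations: rte_tm rates arrive in bytes/sec but are stored in bits/sec, and a committed-only profile is folded into the peak (PIR) rate since single-rate shaping always uses PIR. A sketch of that logic on a stand-in struct (the struct and function name are ours, not the driver's):

```c
#include <stdint.h>

struct rates {
	uint64_t peak_bps;	/* stored in bits/sec */
	uint64_t committed_bps;	/* stored in bits/sec */
};

static void normalize_rates(struct rates *r,
			    uint64_t peak_Bps, uint64_t committed_Bps)
{
	/* rte_tm gives bytes/sec; keep bits/sec internally */
	r->peak_bps = peak_Bps * 8;
	r->committed_bps = committed_Bps * 8;

	/* Always use PIR for single-rate shaping */
	if (!r->peak_bps && r->committed_bps) {
		r->peak_bps = r->committed_bps;
		r->committed_bps = 0;
	}
}
```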
-
-static int
-otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID does not exist";
- return -EINVAL;
- }
-
- if (profile->reference_count) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper profile in use";
- return -EINVAL;
- }
-
- otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
- rte_free(profile);
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint32_t lvl,
- struct rte_tm_node_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_nix_tm_node *parent_node;
- int rc, pkt_mode, clear_on_fail = 0;
- uint32_t exp_next_lvl, i;
- uint32_t profile_id;
- uint16_t hw_lvl;
-
- /* we don't support dynamic updates */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "dynamic update not supported";
- return -EIO;
- }
-
- /* Leaf nodes have to be same priority */
- if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "queue shapers must be priority 0";
- return -EIO;
- }
-
- parent_node = nix_tm_node_search(dev, parent_node_id, true);
-
- /* find the right level */
- if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
- if (parent_node_id == RTE_TM_NODE_ID_NULL) {
- lvl = OTX2_TM_LVL_ROOT;
- } else if (parent_node) {
- lvl = parent_node->lvl + 1;
- } else {
-			/* Neither a proper parent nor a proper level id given */
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -ERANGE;
- }
- }
-
- /* Translate rte_tm level id's to nix hw level id's */
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
- if (hw_lvl == NIX_TXSCH_LVL_CNT &&
- !nix_tm_is_leaf(dev, lvl)) {
- error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
- error->message = "invalid level id";
- return -ERANGE;
- }
-
- if (node_id < dev->tm_leaf_cnt)
- exp_next_lvl = NIX_TXSCH_LVL_SMQ;
- else
- exp_next_lvl = hw_lvl + 1;
-
- /* Check if there is no parent node yet */
- if (hw_lvl != dev->otx2_tm_root_lvl &&
- (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -EINVAL;
- }
-
- /* Check if a node already exists */
- if (nix_tm_node_search(dev, node_id, true)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "node already exists";
- return -EINVAL;
- }
-
- if (!nix_tm_is_leaf(dev, lvl)) {
- /* Check if shaper profile exists for non leaf node */
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "invalid shaper profile";
- return -EINVAL;
- }
-
- /* Minimum static priority count is 1 */
- if (!params->nonleaf.n_sp_priorities ||
- params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
- error->message = "invalid sp priorities";
- return -EINVAL;
- }
-
- pkt_mode = 0;
- /* Validate weight mode */
- for (i = 0; i < params->nonleaf.n_sp_priorities &&
- params->nonleaf.wfq_weight_mode; i++) {
- pkt_mode = !params->nonleaf.wfq_weight_mode[i];
- if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
- continue;
-
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
- error->message = "unsupported weight mode";
- return -EINVAL;
- }
-
- if (profile && params->nonleaf.n_sp_priorities &&
- pkt_mode != profile->params.packet_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper wfq packet mode mismatch";
- return -EINVAL;
- }
- }
-
- /* Check if there is second DWRR already in siblings or holes in prio */
- if (validate_prio(dev, lvl, parent_node_id, priority, error))
- return -EINVAL;
-
- if (weight > MAX_SCHED_WEIGHT) {
- error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "max weight exceeded";
- return -EINVAL;
- }
-
- rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
- priority, weight, hw_lvl,
- lvl, true, params);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- /* cleanup user added nodes */
- if (clear_on_fail)
- nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, false);
- error->message = "failed to add node";
- return rc;
- }
- error->type = RTE_TM_ERROR_TYPE_NONE;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *child_node;
- struct otx2_nix_tm_shaper_profile *profile;
- uint32_t profile_id;
-
- /* we don't support dynamic updates yet */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "hierarchy exists";
- return -EIO;
- }
-
- if (node_id == RTE_TM_NODE_ID_NULL) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node id";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Check for any existing children */
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (child_node->parent == tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "children exist";
- return -EINVAL;
- }
- }
-
- /* Remove shaper profile reference */
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- return 0;
-}
-
-static int
-nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error, bool suspend)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint16_t flags;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- flags = tm_node->flags;
- flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
- (flags | NIX_TM_NODE_ENABLED);
-
- if (tm_node->flags == flags)
- return 0;
-
- /* send mbox for state change */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node, suspend,
- req->reg, req->regval);
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags = flags;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
-}
-
-static int
-otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
-}
-
-static int
-otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
- int clear_on_fail,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint32_t leaf_cnt = 0;
- int rc;
-
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy exists";
- return -EINVAL;
- }
-
- /* Check if we have all the leaf nodes */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->flags & NIX_TM_NODE_USER &&
- tm_node->id < dev->tm_leaf_cnt)
- leaf_cnt++;
- }
-
- if (leaf_cnt != dev->tm_leaf_cnt) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "incomplete hierarchy";
- return -EINVAL;
- }
-
- /*
- * Disable xmit will be enabled when
- * new topology is available.
- */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- /* Delete default/ratelimit tree */
- if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free default resources";
- return rc;
- }
- dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
- NIX_TM_RATE_LIMIT_TREE);
- }
-
- /* Free up user alloc'ed resources */
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free user resources";
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "alloc resources failed";
- /* TODO should we restore default config ? */
- if (clear_on_fail)
- nix_tm_free_resources(dev, 0, 0, false);
- return rc;
- }
-
- error->type = RTE_TM_ERROR_TYPE_NONE;
- dev->tm_flags |= NIX_TM_COMMITTED;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node";
- return -EINVAL;
- }
-
- if (profile_id == tm_node->params.shaper_profile_id)
- return 0;
-
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile ID not exist";
- return -EINVAL;
- }
- }
-
- if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile pkt mode mismatch";
- return -EINVAL;
- }
-
- tm_node->params.shaper_profile_id = profile_id;
-
- /* Nothing to do if not yet committed */
- if (!(dev->tm_flags & NIX_TM_COMMITTED))
- return 0;
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Flush the specific node with SW_XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
- k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
- req->num_regs = k;
-
- rc = send_tm_reqval(mbox, req, error);
- if (rc)
- return rc;
-
- shaper_default_red_algo(dev, tm_node, profile);
-
- /* Update the PIR/CIR and clear SW XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
-
- k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
-
- req->num_regs = k;
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id, uint32_t new_parent_id,
- uint32_t priority, uint32_t weight,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_nix_tm_node *new_parent;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Parent id valid only for non root nodes */
- if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
- new_parent = nix_tm_node_search(dev, new_parent_id, true);
- if (!new_parent) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "no such parent node";
- return -EINVAL;
- }
-
- /* Current support is only for dynamic weight update */
- if (tm_node->parent != new_parent ||
- tm_node->priority != priority) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "only weight update supported";
- return -EINVAL;
- }
- }
-
- /* Skip if no change */
- if (tm_node->weight == weight)
- return 0;
-
- tm_node->weight = weight;
-
- /* For leaf nodes, SQ CTX needs update */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- /* Update SQ quantum data on the fly */
- rc = nix_sq_sched_data(dev, tm_node, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "sq sched data update failed";
- return rc;
- }
- } else {
- /* XOFF Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XOFF this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* Update new weight for current node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sched_reg(dev, tm_node,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_stats *stats,
- uint64_t *stats_mask, int clear,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint64_t reg, val;
- int64_t *addr;
- int rc = 0;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "HW resources not allocated";
- return -EINVAL;
- }
-
- /* Stats support only for leaf node or TL1 root */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- reg = (((uint64_t)tm_node->id) << 32);
-
- /* Packets */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_pkts = val - tm_node->last_pkts;
-
- /* Bytes */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_bytes = val - tm_node->last_bytes;
-
- if (clear) {
- tm_node->last_pkts = stats->n_pkts;
- tm_node->last_bytes = stats->n_bytes;
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
-
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "stats read error";
-
- /* RED Drop packets */
- reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
- val - tm_node->last_pkts;
-
- /* RED Drop bytes */
- reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
- val - tm_node->last_bytes;
-
- /* Clear stats */
- if (clear) {
- tm_node->last_pkts =
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
- tm_node->last_bytes =
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- } else {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "unsupported node";
- rc = -EINVAL;
- }
-
-exit:
- return rc;
-}
-
-const struct rte_tm_ops otx2_tm_ops = {
- .node_type_get = otx2_nix_tm_node_type_get,
-
- .capabilities_get = otx2_nix_tm_capa_get,
- .level_capabilities_get = otx2_nix_tm_level_capa_get,
- .node_capabilities_get = otx2_nix_tm_node_capa_get,
-
- .shaper_profile_add = otx2_nix_tm_shaper_profile_add,
- .shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
-
- .node_add = otx2_nix_tm_node_add,
- .node_delete = otx2_nix_tm_node_delete,
- .node_suspend = otx2_nix_tm_node_suspend,
- .node_resume = otx2_nix_tm_node_resume,
- .hierarchy_commit = otx2_nix_tm_hierarchy_commit,
-
- .node_shaper_update = otx2_nix_tm_node_shaper_update,
- .node_parent_update = otx2_nix_tm_node_parent_update,
- .node_stats_read = otx2_nix_tm_node_stats_read,
-};
-
-static int
-nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i;
- int rc = 0, leaf_level;
-
- /* Default params */
- memset(&params, 0, sizeof(params));
- params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 4;
- leaf_level = OTX2_TM_LVL_QUEUE;
- } else {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 3;
- leaf_level = OTX2_TM_LVL_SCH4;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- leaf_level, false, &params);
- if (rc)
- break;
- }
-
-exit:
- return rc;
-}
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- TAILQ_INIT(&dev->node_list);
- TAILQ_INIT(&dev->shaper_profile_list);
- dev->tm_rate_min = 1E9; /* 1Gbps */
-}
-
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- int rc;
-
- /* Free up all resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to free up existing resources, rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
- dev->tm_flags = NIX_TM_DEFAULT_TREE;
-
- /* Disable TL1 Static Priority when VF's are enabled
- * as otherwise VF's TL2 reallocation will be needed
- * runtime to support a specific topology of PF.
- */
- if (pci_dev->max_vfs)
- dev->tm_flags |= NIX_TM_TL1_NO_SP;
-
- rc = nix_tm_prepare_default_tree(eth_dev);
- if (rc != 0)
- return rc;
-
- rc = nix_tm_alloc_resources(eth_dev, false);
- if (rc != 0)
- return rc;
- dev->tm_leaf_cnt = sq_cnt;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i, rc = 0;
-
- memset(&params, 0, sizeof(params));
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 3;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4,
- false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i,
- leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_QUEUE,
- false, &params);
- if (rc)
- goto error;
- }
-
- return 0;
- }
-
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 2;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3,
- false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_SCH4,
- false, &params);
- if (rc)
- break;
- }
-error:
- return rc;
-}
-
-static int
-otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
- struct otx2_nix_tm_node *tm_node,
- uint64_t tx_rate)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile profile;
- struct otx2_mbox *mbox = dev->mbox;
- volatile uint64_t *reg, *regval;
- struct nix_txschq_config *req;
- uint16_t flags;
- uint8_t k = 0;
- int rc;
-
- flags = tm_node->flags;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_MDQ;
- reg = req->reg;
- regval = req->regval;
-
- if (tx_rate == 0) {
- k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
- flags &= ~NIX_TM_NODE_ENABLED;
- goto exit;
- }
-
- if (!(flags & NIX_TM_NODE_ENABLED)) {
- k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
- flags |= NIX_TM_NODE_ENABLED;
- }
-
- /* Use only PIR for rate limit */
- memset(&profile, 0, sizeof(profile));
- profile.params.peak.rate = tx_rate;
- /* Minimum burst of ~4us Bytes of Tx */
- profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
- (4ull * tx_rate) / (1E6 * 8));
- if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
- dev->tm_rate_min = tx_rate;
-
- k += prepare_tm_shaper_reg(tm_node, &profile, &reg[k], &regval[k]);
-exit:
- req->num_regs = k;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- tm_node->flags = flags;
- return 0;
-}
-
-int
-otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate_mbps)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- if (queue_idx >= eth_dev->data->nb_tx_queues)
- return -EINVAL;
-
- if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
- goto error;
-
- if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- eth_dev->data->nb_tx_queues > 1) {
- /* For TM topology change ethdev needs to be stopped */
- if (eth_dev->data->dev_started)
- return -EBUSY;
-
- /*
- * Disable xmit will be enabled when
- * new topology is available.
- */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc < 0) {
- otx2_tm_dbg("failed to free default resources, rc %d",
- rc);
- return -EIO;
- }
-
- rc = nix_tm_prepare_rate_limited_tree(eth_dev);
- if (rc < 0) {
- otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc != 0) {
- otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
- return rc;
- }
-
- dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
- dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
- }
-
- tm_node = nix_tm_node_search(dev, queue_idx, false);
-
- /* check if we found a valid leaf node */
- if (!tm_node ||
- !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent ||
- tm_node->parent->hw_id == UINT32_MAX)
- return -EIO;
-
- return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
-error:
- otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
- return -EINVAL;
-}
-
-int
-otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (!arg)
- return -EINVAL;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- *(const void **)arg = &otx2_tm_ops;
-
- return 0;
-}
-
-int
-otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- /* Xmit is assumed to be disabled */
- /* Free up resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to free up existing resources, rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
-
- dev->tm_flags = 0;
- return 0;
-}
-
-int
-otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq)
-{
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* 0..sq_cnt-1 are leaf nodes */
- if (sq >= dev->tm_leaf_cnt)
- return -EINVAL;
-
- /* Search for internal node first */
- tm_node = nix_tm_node_search(dev, sq, false);
- if (!tm_node)
- tm_node = nix_tm_node_search(dev, sq, true);
-
- /* Check if we found a valid leaf node */
- if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
- return -EIO;
- }
-
- /* Get SMQ Id of leaf node's parent */
- *smq = tm_node->parent->hw_id;
- *rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc)
- return rc;
- tm_node->flags |= NIX_TM_NODE_ENABLED;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
deleted file mode 100644
index db44d4891f..0000000000
--- a/drivers/net/octeontx2/otx2_tm.h
+++ /dev/null
@@ -1,176 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TM_H__
-#define __OTX2_TM_H__
-
-#include <stdbool.h>
-
-#include <rte_tm_driver.h>
-
-#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
-#define NIX_TM_COMMITTED BIT_ULL(1)
-#define NIX_TM_RATE_LIMIT_TREE BIT_ULL(2)
-#define NIX_TM_TL1_NO_SP BIT_ULL(3)
-
-struct otx2_eth_dev;
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
-int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate);
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
-int otx2_nix_sq_flush_post(void *_txq);
-int otx2_nix_sq_enable(void *_txq);
-int otx2_nix_get_link(struct otx2_eth_dev *dev);
-int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
-
-struct otx2_nix_tm_node {
- TAILQ_ENTRY(otx2_nix_tm_node) node;
- uint32_t id;
- uint32_t hw_id;
- uint32_t priority;
- uint32_t weight;
- uint16_t lvl;
- uint16_t hw_lvl;
- uint32_t rr_prio;
- uint32_t rr_num;
- uint32_t max_prio;
- uint32_t parent_hw_id;
- uint32_t flags:16;
-#define NIX_TM_NODE_HWRES BIT_ULL(0)
-#define NIX_TM_NODE_ENABLED BIT_ULL(1)
-#define NIX_TM_NODE_USER BIT_ULL(2)
-#define NIX_TM_NODE_RED_DISCARD BIT_ULL(3)
- /* Shaper algorithm for RED state @NIX_REDALG_E */
- uint32_t red_algo:2;
- uint32_t pkt_mode:1;
-
- struct otx2_nix_tm_node *parent;
- struct rte_tm_node_params params;
-
- /* Last stats */
- uint64_t last_pkts;
- uint64_t last_bytes;
-};
-
-struct otx2_nix_tm_shaper_profile {
- TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
- uint32_t shaper_profile_id;
- uint32_t reference_count;
- struct rte_tm_shaper_params params; /* Rate in bits/sec */
-};
-
-struct shaper_params {
- uint64_t burst_exponent;
- uint64_t burst_mantissa;
- uint64_t div_exp;
- uint64_t exponent;
- uint64_t mantissa;
- uint64_t burst;
- uint64_t rate;
-};
-
-TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
-TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
-
-#define MAX_SCHED_WEIGHT ((uint8_t)~0)
-#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
-#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight) \
- ((((__weight) & MAX_SCHED_WEIGHT) * \
- NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
-
-/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
-/* = NIX_MAX_HW_MTU */
-#define DEFAULT_RR_WEIGHT 71
-
-/** NIX rate limits */
-#define MAX_RATE_DIV_EXP 12
-#define MAX_RATE_EXPONENT 0xf
-#define MAX_RATE_MANTISSA 0xff
-
-#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
-
-/* NIX rate calculation in Bits/Sec
- * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
- * << NIX_*_PIR[RATE_EXPONENT]) / 256
- * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *
- * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
- * << NIX_*_CIR[RATE_EXPONENT]) / 256
- * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- */
-#define SHAPER_RATE(exponent, mantissa, div_exp) \
- ((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
- / (((1ull << (div_exp)) * 256)))
-
-/* 96xx rate limits in Bits/Sec */
-#define MIN_SHAPER_RATE \
- SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE \
- SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
-
-/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */
-#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
-#define NIX_LENGTH_ADJUST_MAX 255
-
-/** TM Shaper - low level operations */
-
-/** NIX burst limits */
-#define MAX_BURST_EXPONENT 0xf
-#define MAX_BURST_MANTISSA 0xff
-
-/* NIX burst calculation
- * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
- * << (NIX_*_PIR[BURST_EXPONENT] + 1))
- * / 256
- *
- * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
- * << (NIX_*_CIR[BURST_EXPONENT] + 1))
- * / 256
- */
-#define SHAPER_BURST(exponent, mantissa) \
- (((256 + (mantissa)) << ((exponent) + 1)) / 256)
-
-/** Shaper burst limits */
-#define MIN_SHAPER_BURST \
- SHAPER_BURST(0, 0)
-
-#define MAX_SHAPER_BURST \
- SHAPER_BURST(MAX_BURST_EXPONENT,\
- MAX_BURST_MANTISSA)
-
-/* Default TL1 priority and Quantum from AF */
-#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
-#define TXSCH_TL1_DFLT_RR_PRIO 1
-
-#define TXSCH_TLX_SP_PRIO_MAX 10
-
-static inline const char *
-nix_hwlvl2str(uint32_t hw_lvl)
-{
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- return "SMQ/MDQ";
- case NIX_TXSCH_LVL_TL4:
- return "TL4";
- case NIX_TXSCH_LVL_TL3:
- return "TL3";
- case NIX_TXSCH_LVL_TL2:
- return "TL2";
- case NIX_TXSCH_LVL_TL1:
- return "TL1";
- default:
- break;
- }
-
- return "???";
-}
-
-#endif /* __OTX2_TM_H__ */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
deleted file mode 100644
index e95184632f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.c
+++ /dev/null
@@ -1,1077 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-
-#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
- /* Cached value is low, Update the fc_cache_pkts */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
- /* Multiply with sqe_per_sqb to express in pkts */ \
- (txq)->fc_cache_pkts = \
- ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
- (txq)->sqes_per_sqb_log2; \
- /* Check it again for the room */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) \
- return 0; \
- } \
-} while (0)
-
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint16_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, 4, flags);
- otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint64_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
- uint16_t segdw;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, segdw,
- flags);
- otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-#define NIX_DESCS_PER_LOOP 4
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
- uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
- uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- uint64x2_t senddesc01_w0, senddesc23_w0;
- uint64x2_t senddesc01_w1, senddesc23_w1;
- uint64x2_t sgdesc01_w0, sgdesc23_w0;
- uint64x2_t sgdesc01_w1, sgdesc23_w1;
- struct otx2_eth_txq *txq = tx_queue;
- uint64_t *lmt_addr = txq->lmt_addr;
- rte_iova_t io_addr = txq->io_addr;
- uint64x2_t ltypes01, ltypes23;
- uint64x2_t xtmp128, ytmp128;
- uint64x2_t xmask01, xmask23;
- uint64x2_t cmd00, cmd01;
- uint64x2_t cmd10, cmd11;
- uint64x2_t cmd20, cmd21;
- uint64x2_t cmd30, cmd31;
- uint64_t lmt_status, i;
- uint16_t pkts_left;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- /* Lets commit any changes in the packet here as no further changes
- * to the packet will be done unless no fast free is enabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
- senddesc23_w0 = senddesc01_w0;
- senddesc01_w1 = vdupq_n_u64(0);
- senddesc23_w1 = senddesc01_w1;
- sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
- sgdesc23_w0 = sgdesc01_w0;
-
- for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
- /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
- senddesc01_w0 = vbicq_u64(senddesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
- sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
-
- senddesc23_w0 = senddesc01_w0;
- sgdesc23_w0 = sgdesc01_w0;
-
- /* Move mbufs to iova */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, buf_iova));
- /*
- * Get mbuf's, olflags, iova, pktlen, dataoff
- * dataoff_iovaX.D[0] = iova,
- * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
- * len_olflagsX.D[0] = ol_flags,
- * len_olflagsX.D[1](63:32) = mbuf->pkt_len
- */
- dataoff_iova0 = vld1q_u64(mbuf0);
- len_olflags0 = vld1q_u64(mbuf0 + 2);
- dataoff_iova1 = vld1q_u64(mbuf1);
- len_olflags1 = vld1q_u64(mbuf1 + 2);
- dataoff_iova2 = vld1q_u64(mbuf2);
- len_olflags2 = vld1q_u64(mbuf2 + 2);
- dataoff_iova3 = vld1q_u64(mbuf3);
- len_olflags3 = vld1q_u64(mbuf3 + 2);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- struct rte_mbuf *mbuf;
- /* Set don't free bit if reference count > 1 */
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- } else {
- struct rte_mbuf *mbuf;
- /* Mark mempool object as "put" since
- * it is freed by NIX
- */
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
- RTE_SET_USED(mbuf);
- }
-
- /* Move mbufs to point pool */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (flags &
- (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- /* Get tx_offload for ol2, ol3, l2, l3 lengths */
- /*
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- */
-
- asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf0 + 2) : "memory");
-
- asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf1 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf2 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf3 + 2) : "memory");
-
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- } else {
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- }
-
- const uint8x16_t shuf_mask2 = {
- 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
-
- /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
- const uint64x2_t and_mask0 = {
- 0xFFFFFFFFFFFFFFFF,
- 0x000000000000FFFF,
- };
-
- dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
- dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
- dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
- dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
-
- /*
- * Pick only 16 bits of pktlen preset at bits 63:32
- * and place them at bits 15:0.
- */
- xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
- ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
-
- /* Add pairwise to get dataoff + iova in sgdesc_w1 */
- sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
- sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
-
- /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of
- * pktlen at 15:0 position.
- */
- sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
- sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
-
- if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * il3/il4 types. But we still use ol3/ol4 types in
- * senddesc_w1 as only one header processing is enabled.
- */
- const uint8x16_t tbl = {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6 assumed) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):L3_LEN(9):L2_LEN(7+z)
- * E(47):L3_LEN(9):L2_LEN(7+z)
- */
- senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
- senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
-
- /* Move OLFLAGS bits 55:52 to 51:48
- * with zeros preprended on the byte and rest
- * don't care
- */
- xtmp128 = vshrq_n_u8(xtmp128, 4);
- ytmp128 = vshrq_n_u8(ytmp128, 4);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 48:55 of iltype
- * and place it in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * ol3/ol4 types.
- */
-
- const uint8x16_t tbl = {
- /* [0-15] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
- 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer ol flags only */
- const uint64x2_t o_cksum_mask = {
- 0x1C00020000000000,
- 0x1C00020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift oltype by 2 to start nibble from BIT(56)
- * instead of BIT(58)
- */
- xtmp128 = vshrq_n_u8(xtmp128, 2);
- ytmp128 = vshrq_n_u8(ytmp128, 2);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 56:63 of oltype
- * and place it in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /* Lookup table to translate ol_flags to
- * ol4type, ol3type, il4type, il3type of senddesc_w1
- */
- const uint8x16x2_t tbl = {
- {
- {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x22, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x32, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x03, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_IP_CKSUM
- */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- },
-
- {
- /* [16-31] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IP_CKSUM
- */
- 0x32, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4
- */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM |
- * OUTER_IPV6
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- },
- }
- };
-
- /* Extract olflags to translate to oltype & iltype */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- */
- const uint32x4_t tshft_4 = {
- 1, 0,
- 1, 0,
- };
- senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
- senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
-
- /*
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer and inner header ol_flags */
- const uint64x2_t oi_cksum_mask = {
- 0x1CF0020000000000,
- 0x1CF0020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift right oltype by 2 and iltype by 4
- * to start oltype nibble from BIT(58)
- * instead of BIT(56) and iltype nibble from BIT(48)
- * instead of BIT(52).
- */
- const int8x16_t tshft5 = {
- 8, 8, 8, 8, 8, 8, -4, -2,
- 8, 8, 8, 8, 8, 8, -4, -2,
- };
-
- xtmp128 = vshlq_u8(xtmp128, tshft5);
- ytmp128 = vshlq_u8(ytmp128, tshft5);
- /*
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- */
- const int8x16_t tshft3 = {
- -1, 0, -1, 0, 0, 0, 0, 0,
- -1, 0, -1, 0, 0, 0, 0, 0,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Mark Bit(4) of oltype */
- const uint64x2_t oi_cksum_mask2 = {
- 0x1000000000000000,
- 0x1000000000000000,
- };
-
- xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
- ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
-
- /* Do the lookup */
- ltypes01 = vqtbl2q_u8(tbl, xtmp128);
- ltypes23 = vqtbl2q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only relevant fields i.e Bit 48:55 of iltype and
- * Bit 56:63 of oltype and place it in corresponding
- * place in senddesc_w1.
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
- * l3len, l2len, ol3len, ol2len.
- * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
- * a = a + (a << 16)
- * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
- * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 16));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 16));
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- } else {
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-
- /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
- }
-
- do {
- vst1q_u64(lmt_addr, cmd00);
- vst1q_u64(lmt_addr + 2, cmd01);
- vst1q_u64(lmt_addr + 4, cmd10);
- vst1q_u64(lmt_addr + 6, cmd11);
- vst1q_u64(lmt_addr + 8, cmd20);
- vst1q_u64(lmt_addr + 10, cmd21);
- vst1q_u64(lmt_addr + 12, cmd30);
- vst1q_u64(lmt_addr + 14, cmd31);
- lmt_status = otx2_lmt_submit(io_addr);
-
- } while (lmt_status == 0);
- tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
- }
-
- if (unlikely(pkts_left))
- pkts += nix_xmit_pkts(tx_queue, tx_pkts, pkts_left, cmd, flags);
-
- return pkts;
-}
-
-#else
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- RTE_SET_USED(tx_queue);
- RTE_SET_USED(tx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(cmd);
- RTE_SET_USED(flags);
- return 0;
-}
-#endif
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* For TSO inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- \
- /* For TSO inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* VLAN, TSTMP, TSO is not supported by vec */ \
- if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
- (flags) & NIX_TX_OFFLOAD_TSTAMP_F || \
- (flags) & NIX_TX_OFFLOAD_TSO_F) \
- return 0; \
- return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, cmd, (flags)); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-static inline void
-pick_tx_func(struct rte_eth_dev *eth_dev,
- const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
- eth_dev->tx_pkt_burst = tx_burst
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-}
-
-void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- if (dev->scalar_ena ||
- (dev->tx_offload_flags &
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F |
- NIX_TX_OFFLOAD_TSO_F)))
- pick_tx_func(eth_dev, nix_eth_tx_burst);
- else
- pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-
- if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
-
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
deleted file mode 100644
index 4bbd5a390f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.h
+++ /dev/null
@@ -1,791 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TX_H__
-#define __OTX2_TX_H__
-
-#define NIX_TX_OFFLOAD_NONE (0)
-#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
-#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
-#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
-#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
-#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
-#define NIX_TX_OFFLOAD_TSO_F BIT(5)
-#define NIX_TX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control xmit_prepare function.
- * Defining it from backwards to denote its been
- * not used as offload flags to pick function
- */
-#define NIX_TX_MULTI_SEG_F BIT(15)
-
-#define NIX_TX_NEED_SEND_HDR_W1 \
- (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
- NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_TX_NEED_EXT_HDR \
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F | \
- NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_UDP_TUN_BITMASK \
- ((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
- (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
-
-#define NIX_LSO_FORMAT_IDX_TSOV4 (0)
-#define NIX_LSO_FORMAT_IDX_TSOV6 (1)
-
-/* Function to determine no of tx subdesc required in case ext
- * sub desc is enabled.
- */
-static __rte_always_inline int
-otx2_nix_tx_ext_subs(const uint16_t flags)
-{
- return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
- ((flags & (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
- 1 : 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
- const uint64_t ol_flags, const uint16_t no_segdw,
- const uint16_t flags)
-{
- if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- struct nix_send_mem_s *send_mem;
- uint16_t off = (no_segdw - 1) << 1;
- const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
-
- send_mem = (struct nix_send_mem_s *)(cmd + off);
- if (flags & NIX_TX_MULTI_SEG_F) {
- /* Retrieving the default desc values */
- cmd[off] = send_mem_desc[6];
-
- /* Using compiler barier to avoid voilation of C
- * aliasing rules.
- */
- rte_compiler_barrier();
- }
-
- /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
- * should not be recorded, hence changing the alg type to
- * NIX_SENDMEMALG_SET and also changing send mem addr field to
- * next 8 bytes as it corrpt the actual tx tstamp registered
- * address.
- */
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp);
-
- send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
- (is_ol_tstamp));
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_pktmbuf_detach(struct rte_mbuf *m)
-{
- struct rte_mempool *mp = m->pool;
- uint32_t mbuf_size, buf_len;
- struct rte_mbuf *md;
- uint16_t priv_size;
- uint16_t refcount;
-
- /* Update refcount of direct mbuf */
- md = rte_mbuf_from_indirect(m);
- refcount = rte_mbuf_refcnt_update(md, -1);
-
- priv_size = rte_pktmbuf_priv_size(mp);
- mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
- buf_len = rte_pktmbuf_data_room_size(mp);
-
- m->priv_size = priv_size;
- m->buf_addr = (char *)m + mbuf_size;
- m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
- m->buf_len = (uint16_t)buf_len;
- rte_pktmbuf_reset_headroom(m);
- m->data_len = 0;
- m->ol_flags = 0;
- m->next = NULL;
- m->nb_segs = 1;
-
- /* Now indirect mbuf is safe to free */
- rte_pktmbuf_free(m);
-
- if (refcount == 0) {
- rte_mbuf_refcnt_set(md, 1);
- md->data_len = 0;
- md->ol_flags = 0;
- md->next = NULL;
- md->nb_segs = 1;
- return 0;
- } else {
- return 1;
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_prefree_seg(struct rte_mbuf *m)
-{
- if (likely(rte_mbuf_refcnt_read(m) == 1)) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- rte_mbuf_refcnt_set(m, 1);
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- }
-
- /* Mbuf is having refcount more than 1 so need not to be freed */
- return 1;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
-{
- uint64_t mask, ol_flags = m->ol_flags;
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
- uint16_t *iplen, *oiplen, *oudplen;
- uint16_t lso_sb, paylen;
-
- mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
- lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
- m->l2_len + m->l3_len + m->l4_len;
-
- /* Reduce payload len from base headers */
- paylen = m->pkt_len - lso_sb;
-
- /* Get iplen position assuming no tunnel hdr */
- iplen = (uint16_t *)(mdata + m->l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
-
- oiplen = (uint16_t *)(mdata + m->outer_l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
- *oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
- paylen);
-
- /* Update format for UDP tunneled packet */
- if (is_udp_tun) {
- oudplen = (uint16_t *)(mdata + m->outer_l2_len +
- m->outer_l3_len + 4);
- *oudplen =
- rte_cpu_to_be_16(rte_be_to_cpu_16(*oudplen) -
- paylen);
- }
-
- /* Update iplen position to inner ip hdr */
- iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
- m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- }
-
- *iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
- }
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- uint64_t ol_flags = 0, mask;
- union nix_send_hdr_w1_u w1;
- union nix_send_sg_s *sg;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- if (flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
- sg = (union nix_send_sg_s *)(cmd + 4);
- /* Clear previous markings */
- send_hdr_ext->w0.lso = 0;
- send_hdr_ext->w1.u = 0;
- } else {
- sg = (union nix_send_sg_s *)(cmd + 2);
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- ol_flags = m->ol_flags;
- w1.u = 0;
- }
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- send_hdr->w0.total = m->data_len;
- send_hdr->w0.aura =
- npa_lf_aura_handle_to_aura(m->pool->pool_id);
- }
-
- /*
- * L3type: 2 => IPV4
- * 3 => IPV4 with csum
- * 4 => IPV6
- * L3type and L3ptr needs to be set for either
- * L3 csum or L4 csum or LSO
- *
- */
-
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t ol3type =
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L3 */
- w1.ol3type = ol3type;
- mask = 0xffffull << ((!!ol3type) << 4);
- w1.ol3ptr = ~mask & m->outer_l2_len;
- w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- /* Inner L3 */
- w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
- w1.il3ptr = w1.ol4ptr + m->l2_len;
- w1.il4ptr = w1.il3ptr + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
-
- /* In case of no tunnel header use only
- * shift IL3/IL4 fields a bit to use
- * OL3/OL4 for header checksum
- */
- mask = !ol3type;
- w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
- ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
-
- } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t outer_l2_len = m->outer_l2_len;
-
- /* Outer L3 */
- w1.ol3ptr = outer_l2_len;
- w1.ol4ptr = outer_l2_len + m->outer_l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
- const uint8_t l2_len = m->l2_len;
-
- /* Always use OLXPTR and OLXTYPE when only
- * when one header is present
- */
-
- /* Inner L3 */
- w1.ol3ptr = l2_len;
- w1.ol4ptr = l2_len + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
- }
-
- if (flags & NIX_TX_NEED_EXT_HDR &&
- flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
- send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
- /* HW will update ptr after vlan0 update */
- send_hdr_ext->w1.vlan1_ins_ptr = 12;
- send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
-
- send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
- /* 2B before end of l2 header */
- send_hdr_ext->w1.vlan0_ins_ptr = 12;
- send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
- }
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uint16_t lso_sb;
- uint64_t mask;
-
- mask = -(!w1.il3type);
- lso_sb = (mask & w1.ol4ptr) + (~mask & w1.il4ptr) + m->l4_len;
-
- send_hdr_ext->w0.lso_sb = lso_sb;
- send_hdr_ext->w0.lso = 1;
- send_hdr_ext->w0.lso_mps = m->tso_segsz;
- send_hdr_ext->w0.lso_format =
- NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
- w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
-
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
- uint8_t shift = is_udp_tun ? 32 : 0;
-
- shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
- shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
-
- w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
- w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
- /* Update format for UDP tunneled packet */
- send_hdr_ext->w0.lso_format = (lso_tun_fmt >> shift);
- }
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
- send_hdr->w1.u = w1.u;
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- sg->seg1_size = m->data_len;
- *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* DF bit = 1 if refcount of current mbuf or parent mbuf
- * is greater than 1
- * DF bit = 0 otherwise
- */
- send_hdr->w0.df = otx2_nix_prefree_seg(m);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
- if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- }
-}
-
-
-static __rte_always_inline void
-otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
- const rte_iova_t io_addr, const uint32_t flags)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prep_lmt(uint64_t *cmd, void *lmt_addr, const uint32_t flags)
-{
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit(io_addr);
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit_release(io_addr);
-}
-
-static __rte_always_inline uint16_t
-otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
-{
- struct nix_send_hdr_s *send_hdr;
- union nix_send_sg_s *sg;
- struct rte_mbuf *m_next;
- uint64_t *slist, sg_u;
- uint64_t nb_segs;
- uint64_t segdw;
- uint8_t off, i;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- send_hdr->w0.total = m->pkt_len;
- send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
-
- if (flags & NIX_TX_NEED_EXT_HDR)
- off = 2;
- else
- off = 0;
-
- sg = (union nix_send_sg_s *)&cmd[2 + off];
- /* Clear sg->u header before use */
- sg->u &= 0xFC00000000000000;
- sg_u = sg->u;
- slist = &cmd[3 + off];
-
- i = 0;
- nb_segs = m->nb_segs;
-
- /* Fill mbuf segments */
- do {
- m_next = m->next;
- sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
- *slist = rte_mbuf_data_iova(m);
- /* Set invert df if buffer is not to be freed by H/W */
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
- /* Commit changes to mbuf */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
- if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- rte_io_wmb();
-#endif
- slist++;
- i++;
- nb_segs--;
- if (i > 2 && nb_segs) {
- i = 0;
- /* Next SG subdesc */
- *(uint64_t *)slist = sg_u & 0xFC00000000000000;
- sg->u = sg_u;
- sg->segs = 3;
- sg = (union nix_send_sg_s *)slist;
- sg_u = sg->u;
- slist++;
- }
- m = m_next;
- } while (nb_segs);
-
- sg->u = sg_u;
- sg->segs = i;
- segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
- /* Roundup extra dwords to multiple of 2 */
- segdw = (segdw >> 1) + (segdw & 0x1);
- /* Default dwords */
- segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
- send_hdr->w0.sizem1 = segdw - 1;
-
- return segdw;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_prep_lmt(uint64_t *cmd, void *lmt_addr, uint16_t segdw)
-{
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one_release(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- rte_io_wmb();
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
-#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
-#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
-#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
-#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
-#define TSO_F NIX_TX_OFFLOAD_TSO_F
-#define TX_SEC_F NIX_TX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES \
-T(no_offload, 0, 0, 0, 0, 0, 0, 0, 4, \
- NIX_TX_OFFLOAD_NONE) \
-T(l3l4csum, 0, 0, 0, 0, 0, 0, 1, 4, \
- L3L4CSUM_F) \
-T(ol3ol4csum, 0, 0, 0, 0, 0, 1, 0, 4, \
- OL3OL4CSUM_F) \
-T(ol3ol4csum_l3l4csum, 0, 0, 0, 0, 0, 1, 1, 4, \
- OL3OL4CSUM_F | L3L4CSUM_F) \
-T(vlan, 0, 0, 0, 0, 1, 0, 0, 6, \
- VLAN_F) \
-T(vlan_l3l4csum, 0, 0, 0, 0, 1, 0, 1, 6, \
- VLAN_F | L3L4CSUM_F) \
-T(vlan_ol3ol4csum, 0, 0, 0, 0, 1, 1, 0, 6, \
- VLAN_F | OL3OL4CSUM_F) \
-T(vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 0, 1, 1, 1, 6, \
- VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff, 0, 0, 0, 1, 0, 0, 0, 4, \
- NOFF_F) \
-T(noff_l3l4csum, 0, 0, 0, 1, 0, 0, 1, 4, \
- NOFF_F | L3L4CSUM_F) \
-T(noff_ol3ol4csum, 0, 0, 0, 1, 0, 1, 0, 4, \
- NOFF_F | OL3OL4CSUM_F) \
-T(noff_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 0, 1, 1, 4, \
- NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff_vlan, 0, 0, 0, 1, 1, 0, 0, 6, \
- NOFF_F | VLAN_F) \
-T(noff_vlan_l3l4csum, 0, 0, 0, 1, 1, 0, 1, 6, \
- NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(noff_vlan_ol3ol4csum, 0, 0, 0, 1, 1, 1, 0, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 1, 1, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts, 0, 0, 1, 0, 0, 0, 0, 8, \
- TSP_F) \
-T(ts_l3l4csum, 0, 0, 1, 0, 0, 0, 1, 8, \
- TSP_F | L3L4CSUM_F) \
-T(ts_ol3ol4csum, 0, 0, 1, 0, 0, 1, 0, 8, \
- TSP_F | OL3OL4CSUM_F) \
-T(ts_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 0, 1, 1, 8, \
- TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_vlan, 0, 0, 1, 0, 1, 0, 0, 8, \
- TSP_F | VLAN_F) \
-T(ts_vlan_l3l4csum, 0, 0, 1, 0, 1, 0, 1, 8, \
- TSP_F | VLAN_F | L3L4CSUM_F) \
-T(ts_vlan_ol3ol4csum, 0, 0, 1, 0, 1, 1, 0, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 1, 1, 1, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff, 0, 0, 1, 1, 0, 0, 0, 8, \
- TSP_F | NOFF_F) \
-T(ts_noff_l3l4csum, 0, 0, 1, 1, 0, 0, 1, 8, \
- TSP_F | NOFF_F | L3L4CSUM_F) \
-T(ts_noff_ol3ol4csum, 0, 0, 1, 1, 0, 1, 0, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(ts_noff_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 0, 1, 1, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff_vlan, 0, 0, 1, 1, 1, 0, 0, 8, \
- TSP_F | NOFF_F | VLAN_F) \
-T(ts_noff_vlan_l3l4csum, 0, 0, 1, 1, 1, 0, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum, 0, 0, 1, 1, 1, 1, 0, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 1, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
- \
-T(tso, 0, 1, 0, 0, 0, 0, 0, 6, \
- TSO_F) \
-T(tso_l3l4csum, 0, 1, 0, 0, 0, 0, 1, 6, \
- TSO_F | L3L4CSUM_F) \
-T(tso_ol3ol4csum, 0, 1, 0, 0, 0, 1, 0, 6, \
- TSO_F | OL3OL4CSUM_F) \
-T(tso_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 0, 1, 1, 6, \
- TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_vlan, 0, 1, 0, 0, 1, 0, 0, 6, \
- TSO_F | VLAN_F) \
-T(tso_vlan_l3l4csum, 0, 1, 0, 0, 1, 0, 1, 6, \
- TSO_F | VLAN_F | L3L4CSUM_F) \
-T(tso_vlan_ol3ol4csum, 0, 1, 0, 0, 1, 1, 0, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 1, 1, 1, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff, 0, 1, 0, 1, 0, 0, 0, 6, \
- TSO_F | NOFF_F) \
-T(tso_noff_l3l4csum, 0, 1, 0, 1, 0, 0, 1, 6, \
- TSO_F | NOFF_F | L3L4CSUM_F) \
-T(tso_noff_ol3ol4csum, 0, 1, 0, 1, 0, 1, 0, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 0, 1, 1, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff_vlan, 0, 1, 0, 1, 1, 0, 0, 6, \
- TSO_F | NOFF_F | VLAN_F) \
-T(tso_noff_vlan_l3l4csum, 0, 1, 0, 1, 1, 0, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum, 0, 1, 0, 1, 1, 1, 0, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 1, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts, 0, 1, 1, 0, 0, 0, 0, 8, \
- TSO_F | TSP_F) \
-T(tso_ts_l3l4csum, 0, 1, 1, 0, 0, 0, 1, 8, \
- TSO_F | TSP_F | L3L4CSUM_F) \
-T(tso_ts_ol3ol4csum, 0, 1, 1, 0, 0, 1, 0, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(tso_ts_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 0, 1, 1, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_vlan, 0, 1, 1, 0, 1, 0, 0, 8, \
- TSO_F | TSP_F | VLAN_F) \
-T(tso_ts_vlan_l3l4csum, 0, 1, 1, 0, 1, 0, 1, 8, \
- TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum, 0, 1, 1, 0, 1, 1, 0, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 1, 1, 1, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff, 0, 1, 1, 1, 0, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F) \
-T(tso_ts_noff_l3l4csum, 0, 1, 1, 1, 0, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum, 0, 1, 1, 1, 0, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 0, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan, 0, 1, 1, 1, 1, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(tso_ts_noff_vlan_l3l4csum, 0, 1, 1, 1, 1, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum, 0, 1, 1, 1, 1, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec, 1, 0, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F) \
-T(sec_l3l4csum, 1, 0, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | L3L4CSUM_F) \
-T(sec_ol3ol4csum, 1, 0, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | OL3OL4CSUM_F) \
-T(sec_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_vlan, 1, 0, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | VLAN_F) \
-T(sec_vlan_l3l4csum, 1, 0, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | VLAN_F | L3L4CSUM_F) \
-T(sec_vlan_ol3ol4csum, 1, 0, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff, 1, 0, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | NOFF_F) \
-T(sec_noff_l3l4csum, 1, 0, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | NOFF_F | L3L4CSUM_F) \
-T(sec_noff_ol3ol4csum, 1, 0, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_noff_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff_vlan, 1, 0, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F) \
-T(sec_noff_vlan_l3l4csum, 1, 0, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum, 1, 0, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts, 1, 0, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F) \
-T(sec_ts_l3l4csum, 1, 0, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | L3L4CSUM_F) \
-T(sec_ts_ol3ol4csum, 1, 0, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_ts_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_vlan, 1, 0, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F) \
-T(sec_ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum, 1, 0, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff, 1, 0, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F) \
-T(sec_ts_noff_l3l4csum, 1, 0, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum, 1, 0, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan, 1, 0, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_ts_noff_vlan_l3l4csum, 1, 0, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum, 1, 0, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso, 1, 1, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F) \
-T(sec_tso_l3l4csum, 1, 1, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | L3L4CSUM_F) \
-T(sec_tso_ol3ol4csum, 1, 1, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F) \
-T(sec_tso_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_vlan, 1, 1, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F) \
-T(sec_tso_vlan_l3l4csum, 1, 1, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum, 1, 1, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff, 1, 1, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F) \
-T(sec_tso_noff_l3l4csum, 1, 1, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum, 1, 1, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan, 1, 1, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F) \
-T(sec_tso_noff_vlan_l3l4csum, 1, 1, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum, 1, 1, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts, 1, 1, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F) \
-T(sec_tso_ts_l3l4csum, 1, 1, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | L3L4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum, 1, 1, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan, 1, 1, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F) \
-T(sec_tso_ts_vlan_l3l4csum, 1, 1, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum, 1, 1, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff, 1, 1, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F) \
-T(sec_tso_ts_noff_l3l4csum, 1, 1, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum, 1, 1, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff_vlan, 1, 1, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_tso_ts_noff_vlan_l3l4csum, 1, 1, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\
-T(sec_tso_ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F | L3L4CSUM_F)
-#endif /* __OTX2_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
deleted file mode 100644
index cce643b7b5..0000000000
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ /dev/null
@@ -1,1035 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-
-#define VLAN_ID_MATCH 0x1
-#define VTAG_F_MATCH 0x2
-#define MAC_ADDR_MATCH 0x4
-#define QINQ_F_MATCH 0x8
-#define VLAN_DROP 0x10
-#define DEF_F_ENTRY 0x20
-
-enum vtag_cfg_dir {
- VTAG_TX,
- VTAG_RX
-};
-
-static int
-nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- if (enable)
- req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
- else
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
-
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static void
-nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry, bool qinq, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
- uint64_t action = 0, vtag_action = 0;
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= (uint64_t)pcifunc << 4;
- entry->action = action;
-
- if (drop) {
- entry->action &= ~((uint64_t)0xF);
- entry->action |= NIX_RX_ACTIONOP_DROP;
- return;
- }
-
- if (!qinq) {
- /* VTAG0 fields denote CTAG in single vlan case */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- } else {
- /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
- vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
- vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
- }
-
- entry->vtag_action = vtag_action;
-}
-
-static void
-nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
- int vtag_index)
-{
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } vtag_action;
-
- uint64_t action;
-
- action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
-
- /*
- * Take offset from LA since in case of untagged packet,
- * lbptr is zero.
- */
- if (type == RTE_ETH_VLAN_TYPE_OUTER) {
- vtag_action.act.vtag0_def = vtag_index;
- vtag_action.act.vtag0_lid = NPC_LID_LA;
- vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
- } else {
- vtag_action.act.vtag1_def = vtag_index;
- vtag_action.act.vtag1_lid = NPC_LID_LA;
- vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
- }
-
- entry->action = action;
- entry->vtag_action = vtag_action.reg;
-}
-
-static int
-nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static int
-nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf, uint8_t ena)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct msghdr *rsp;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
-
- req->entry = ent_idx;
- req->intf = intf;
- req->enable_entry = ena;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- return rc;
-}
-
-static int
-nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_and_write_entry_req *req;
- struct npc_mcam_alloc_and_write_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
-
- if (intf == NPC_MCAM_RX) {
- if (!drop && dev->vlan_info.def_rx_mcam_idx) {
- req->priority = NPC_MCAM_HIGHER_PRIO;
- req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
- } else if (drop && dev->vlan_info.qinq_mcam_idx) {
- req->priority = NPC_MCAM_LOWER_PRIO;
- req->ref_entry = dev->vlan_info.qinq_mcam_idx;
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
-
- req->intf = intf;
- req->enable_entry = 1;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->entry;
-}
-
-static void
-nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_read_entry_req *req;
- struct npc_mcam_read_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t intf, mcam_ena;
- int idx, rc = -EINVAL;
- uint8_t *mac_addr;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Read entry first */
- req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
-
- req->entry = mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read entry %d", mcam_index);
- return;
- }
-
- entry = rsp->entry_data;
- intf = rsp->intf;
- mcam_ena = rsp->enable;
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- if (enable) {
- mcam_mask = 0;
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
-
- } else {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
-
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* Write back the mcam entry */
- rc = nix_vlan_mcam_write(eth_dev, mcam_index,
- &entry, intf, mcam_ena);
- if (rc) {
- otx2_err("Failed to write entry %d", mcam_index);
- return;
- }
-}
-
-void
-otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
-
- /* Already in required mode */
- if (enable == vlan->promisc_on)
- return;
-
- /* Update default rx entry */
- if (vlan->def_rx_mcam_idx)
- nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
-
- /* Update all other rx filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
- nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
-
- vlan->promisc_on = enable;
-}
-
-/* Configure mcam entry with required MCAM search rules */
-static int
-nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
- uint16_t vlan_id, uint16_t flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t *mac_addr;
- int idx, kwi = 0;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- /* Channel base extracted to KW0[11:0] */
- entry.kw[kwi] = dev->rx_chan_base;
- entry.kw_mask[kwi] = BIT_ULL(12) - 1;
-
- /* Adds vlan_id & LB CTAG flag to MCAM KW */
- if (flags & VLAN_ID_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
-
- mcam_data = (uint16_t)vlan_id;
- mcam_mask = (BIT_ULL(16) - 1);
- otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
- &mcam_data, mkex->lb_xtract.len);
- otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
- &mcam_mask, mkex->lb_xtract.len);
- }
-
- /* Adds LB STAG flag to MCAM KW */
- if (flags & QINQ_F_MATCH) {
- entry.kw[kwi] |= NPC_LT_LB_STAG_QINQ << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
- }
-
- /* Adds LB CTAG & LB STAG flags to MCAM KW */
- if (flags & VTAG_F_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
- }
-
- /* Adds port MAC address to MCAM KW */
- if (flags & MAC_ADDR_MATCH) {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* VLAN_DROP: for drop action for all vlan packets when filter is on.
- * For QinQ, enable vtag action for both outer & inner tags
- */
- if (flags & VLAN_DROP)
- nix_set_rx_vlan_action(eth_dev, &entry, false, true);
- else if (flags & QINQ_F_MATCH)
- nix_set_rx_vlan_action(eth_dev, &entry, true, false);
- else
- nix_set_rx_vlan_action(eth_dev, &entry, false, false);
-
- if (flags & DEF_F_ENTRY)
- dev->vlan_info.def_rx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
- flags & VLAN_DROP);
-}
-
-/* Installs/Removes/Modifies default rx entry */
-static int
-nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
- bool filter, bool enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- uint16_t flags = 0;
- int mcam_idx, rc;
-
- /* Use default mcam entry to either drop vlan traffic when
- * vlan filter is on or strip vtag when strip is enabled.
- * Allocate default entry which matches port mac address
- * and vtag(ctag/stag) flags with drop action.
- */
- if (!vlan->def_rx_mcam_idx) {
- if (!eth_dev->data->promiscuous)
- flags = MAC_ADDR_MATCH;
-
- if (filter && enable)
- flags |= VTAG_F_MATCH | VLAN_DROP;
- else if (strip && enable)
- flags |= VTAG_F_MATCH;
- else
- return 0;
-
- flags |= DEF_F_ENTRY;
-
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- return -mcam_idx;
- }
-
- vlan->def_rx_mcam_idx = mcam_idx;
- return 0;
- }
-
- /* Filter is already enabled, so packets would be dropped anyways. No
- * processing needed for enabling strip wrt mcam entry.
- */
-
- /* Filter disable request */
- if (vlan->filter_on && filter && !enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
-
- /* Free default rx entry only when
- * 1. strip is not on and
- * 2. qinq entry is allocated before default entry.
- */
- if (vlan->strip_on ||
- (vlan->qinq_on && !vlan->qinq_before_def)) {
- if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- RTE_ETH_MQ_RX_RSS)
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_RSS;
- else
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_UCAST;
- return nix_vlan_mcam_write(eth_dev,
- vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent,
- NIX_INTF_RX, 1);
- } else {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- /* Filter enable request */
- if (!vlan->filter_on && filter && enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
- vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
- return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
- }
-
- /* Strip disable request */
- if (vlan->strip_on && strip && !enable) {
- if (!vlan->filter_on &&
- !(vlan->qinq_on && !vlan->qinq_before_def)) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- return 0;
-}
-
-/* Installs/Removes default tx entry */
-static int
-nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, int vtag_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct mcam_entry entry;
- uint16_t pf_func;
- int rc;
-
- if (!vlan->def_tx_mcam_idx && enable) {
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Only pf_func is matched, swap it's bytes */
- pf_func = (dev->pf_func & 0xff) << 8;
- pf_func |= (dev->pf_func >> 8) & 0xff;
-
- /* PF Func extracted to KW1[47:32] */
- entry.kw[0] = (uint64_t)pf_func << 32;
- entry.kw_mask[0] = (BIT_ULL(16) - 1) << 32;
-
- nix_set_tx_vlan_action(&entry, type, vtag_index);
- vlan->def_tx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
- NIX_INTF_TX, 0);
- }
-
- if (vlan->def_tx_mcam_idx && !enable) {
- rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
-
- return 0;
-}
-
-/* Configure vlan stripping on or off */
-static int
-nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- int rc = -EINVAL;
-
- rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
- if (rc) {
- otx2_err("Failed to config default rx entry");
- return rc;
- }
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- /* cfg_type = 1 for rx vlan cfg */
- vtag_cfg->cfg_type = VTAG_RX;
-
- if (enable)
- vtag_cfg->rx.strip_vtag = 1;
- else
- vtag_cfg->rx.strip_vtag = 0;
-
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- /* Use rx vtag type index[0] for now */
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- dev->vlan_info.strip_on = enable;
- return rc;
-}
-
-/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
-static int
-nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
- uint16_t vlan_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc = -EINVAL;
-
- if (!vlan_id && enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- /* Enable/disable existing vlan filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (vlan_id) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_enb_dis(dev,
- entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- } else {
- rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- }
-
- if (!vlan_id && !enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- return 0;
-}
-
-/* Enable/disable vlan filtering for the given vlan_id */
-int
-otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int entry_exists = 0;
- int rc = -EINVAL;
- int mcam_idx;
-
- if (!vlan_id) {
- otx2_err("Vlan Id can't be zero");
- return rc;
- }
-
- if (!vlan->def_rx_mcam_idx) {
- otx2_err("Vlan Filtering is disabled, enable it first");
- return rc;
- }
-
- if (on) {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- /* Vlan entry already exists */
- entry_exists = 1;
- /* Mcam entry already allocated */
- if (entry->mcam_idx) {
- rc = nix_vlan_hw_filter(eth_dev, on,
- vlan_id);
- return rc;
- }
- break;
- }
- }
-
- if (!entry_exists) {
- entry = rte_zmalloc("otx2_nix_vlan_entry",
- sizeof(struct vlan_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- return -ENOMEM;
- }
- }
-
- /* Enables vlan_id & mac address based filtering */
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH |
- MAC_ADDR_MATCH);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- return mcam_idx;
- }
-
- entry->mcam_idx = mcam_idx;
- if (!entry_exists) {
- entry->vlan_id = vlan_id;
- TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
- }
- } else {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_free(dev, entry->mcam_idx);
- if (rc)
- return rc;
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- break;
- }
- }
- }
- return 0;
-}
-
-/* Configure double vlan(qinq) on or off */
-static int
-otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
- const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan_info;
- int mcam_idx;
- int rc;
-
- vlan_info = &dev->vlan_info;
-
- if (!enable) {
- if (!vlan_info->qinq_mcam_idx)
- return 0;
-
- rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
- if (rc)
- return rc;
-
- vlan_info->qinq_mcam_idx = 0;
- dev->vlan_info.qinq_on = 0;
- vlan_info->qinq_before_def = 0;
- return 0;
- }
-
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
- QINQ_F_MATCH | MAC_ADDR_MATCH);
- if (mcam_idx < 0)
- return mcam_idx;
-
- if (!vlan_info->def_rx_mcam_idx)
- vlan_info->qinq_before_def = 1;
-
- vlan_info->qinq_mcam_idx = mcam_idx;
- dev->vlan_info.qinq_on = 1;
- return 0;
-}
-
-int
-otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t offloads = dev->rx_offloads;
- struct rte_eth_rxmode *rxmode;
- int rc = 0;
-
- rxmode = &eth_dev->data->dev_conf.rxmode;
-
- if (mask & RTE_ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, true);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, false);
- }
- if (rc)
- goto done;
- }
-
- if (mask & RTE_ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, true, 0);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, false, 0);
- }
- if (rc)
- goto done;
- }
-
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
- if (!dev->vlan_info.qinq_on) {
- offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, true);
- if (rc)
- goto done;
- }
- } else {
- if (dev->vlan_info.qinq_on) {
- offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, false);
- if (rc)
- goto done;
- }
- }
-
- if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
- dev->rx_offloads |= offloads;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(eth_dev);
- }
-
-done:
- return rc;
-}
-
-int
-otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct nix_set_vlan_tpid *tpid_cfg;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
-
- tpid_cfg->tpid = tpid;
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
- else
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- dev->vlan_info.outer_vlan_tpid = tpid;
- else
- dev->vlan_info.inner_vlan_tpid = tpid;
- return 0;
-}
-
-int
-otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
- struct otx2_mbox *mbox = otx2_dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- struct otx2_vlan_info *vlan;
- int rc, rc1, vtag_index = 0;
-
- if (vlan_id == 0) {
- otx2_err("vlan id can't be zero");
- return -EINVAL;
- }
-
- vlan = &otx2_dev->vlan_info;
-
- if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
- otx2_err("pvid %d is already enabled", vlan_id);
- return -EINVAL;
- }
-
- if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
- otx2_err("another pvid is enabled, disable that first");
- return -EINVAL;
- }
-
- /* No pvid active */
- if (!on && !vlan->pvid_insert_on)
- return 0;
-
- /* Given pvid already disabled */
- if (!on && vlan->pvid != vlan_id)
- return 0;
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
-
- if (on) {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- if (vlan->outer_vlan_tpid)
- vtag_cfg->tx.vtag0 = ((uint32_t)vlan->outer_vlan_tpid
- << 16) | vlan_id;
- else
- vtag_cfg->tx.vtag0 =
- ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- } else {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
- vtag_cfg->tx.free_vtag0 = 1;
- }
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (on) {
- vtag_index = rsp->vtag0_idx;
- } else {
- vlan->pvid = 0;
- vlan->pvid_insert_on = 0;
- vlan->outer_vlan_idx = 0;
- }
-
- rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
- vtag_index, on);
- if (rc < 0) {
- printf("Default tx entry failed with rc %d\n", rc);
- vtag_cfg->tx.vtag0_idx = vtag_index;
- vtag_cfg->tx.free_vtag0 = 1;
- vtag_cfg->tx.cfg_vtag0 = 0;
-
- rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc1)
- otx2_err("Vtag free failed");
-
- return rc;
- }
-
- if (on) {
- vlan->pvid = vlan_id;
- vlan->pvid_insert_on = 1;
- vlan->outer_vlan_idx = vtag_index;
- }
-
- return 0;
-}
-
-void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
- __rte_unused uint16_t queue,
- __rte_unused int on)
-{
- otx2_err("Not Supported");
-}
-
-static int
-nix_vlan_rx_mkex_offset(uint64_t mask)
-{
- int nib_count = 0;
-
- while (mask) {
- nib_count += mask & 1;
- mask >>= 1;
- }
-
- return nib_count * 4;
-}
-
-static int
-nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
-{
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_xtract_info *x_info = NULL;
- uint64_t rx_keyx;
- otx2_dxcfg_t *p;
- int rc = -EINVAL;
-
- if (npc == NULL) {
- otx2_err("Missing npc mkex configuration");
- return rc;
- }
-
-#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
-
- rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
- if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
- return rc;
-
- if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
- NPC_KEX_LB_LTYPE_NIBBLE_ENA)
- return rc;
-
- mkex->lb_lt_offset =
- nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
-
- p = &npc->prx_dxcfg;
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
- memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
- memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
-
- return 0;
-}
-
-static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_entry *entry;
- int rc;
-
- /* VLAN filters can't be set without setting filtern on */
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
- if (rc) {
- otx2_err("Failed to reinstall vlan filters");
- return;
- }
-
- TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
- rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
- if (rc)
- otx2_err("Failed to reinstall filter for vlan:%d",
- entry->vlan_id);
- }
-}
-
-int
-otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, mask;
-
- /* Port initialized for first time or restarted */
- if (!dev->configured) {
- rc = nix_vlan_get_mkex_info(dev);
- if (rc) {
- otx2_err("Failed to get vlan mkex info rc=%d", rc);
- return rc;
- }
-
- TAILQ_INIT(&dev->vlan_info.fltr_tbl);
- } else {
- /* Reinstall all mcam entries now if filter offload is set */
- if (eth_dev->data->dev_conf.rxmode.offloads &
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
- nix_vlan_reinstall_vlan_filters(eth_dev);
- }
-
- mask =
- RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
- rc = otx2_nix_vlan_offload_set(eth_dev, mask);
- if (rc) {
- otx2_err("Failed to set vlan offload rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-int
-otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc;
-
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (!dev->configured) {
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- } else {
- /* MCAM entries freed by flow_fini & lf_free on
- * port stop.
- */
- entry->mcam_idx = 0;
- }
- }
-
- if (!dev->configured) {
- if (vlan->def_rx_mcam_idx) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- }
- }
-
- otx2_nix_config_double_vlan(eth_dev, false);
- vlan->def_rx_mcam_idx = 0;
- return 0;
-}
diff --git a/drivers/net/octeontx2/version.map b/drivers/net/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/net/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
index 9326925025..dc720368ab 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -113,7 +113,7 @@
#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
+#define PCI_DEVID_CN9K_EP_NET_VF 0xB203 /* OCTEON 9 EP mode */
#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
int
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index fd5e8ed263..8a59a1a194 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -150,7 +150,7 @@ struct otx_ep_iq_config {
/** The instruction (input) queue.
* The input queue is used to post raw (instruction) mode data or packet data
- * to OCTEON TX2 device from the host. Each IQ of a OTX_EP EP VF device has one
+ * to OCTEON 9 device from the host. Each IQ of a OTX_EP EP VF device has one
* such structure to represent it.
*/
struct otx_ep_instr_queue {
@@ -170,12 +170,12 @@ struct otx_ep_instr_queue {
/* Input ring index, where the driver should write the next packet */
uint32_t host_write_index;
- /* Input ring index, where the OCTEON TX2 should read the next packet */
+ /* Input ring index, where the OCTEON 9 should read the next packet */
uint32_t otx_read_index;
uint32_t reset_instr_cnt;
- /** This index aids in finding the window in the queue where OCTEON TX2
+ /** This index aids in finding the window in the queue where OCTEON 9
* has read the commands.
*/
uint32_t flush_index;
@@ -195,7 +195,7 @@ struct otx_ep_instr_queue {
/* OTX_EP instruction count register for this ring. */
void *inst_cnt_reg;
- /* Number of instructions pending to be posted to OCTEON TX2. */
+ /* Number of instructions pending to be posted to OCTEON 9. */
uint32_t fill_cnt;
/* Statistics for this input queue. */
@@ -230,8 +230,8 @@ union otx_ep_rh {
};
#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh))
-/** Information about packet DMA'ed by OCTEON TX2.
- * The format of the information available at Info Pointer after OCTEON TX2
+/** Information about packet DMA'ed by OCTEON 9.
+ * The format of the information available at Info Pointer after OCTEON 9
* has posted a packet. Not all descriptors have valid information. Only
* the Info field of the first descriptor for a packet has information
* about the packet.
@@ -295,7 +295,7 @@ struct otx_ep_droq {
/* Driver should read the next packet at this index */
uint32_t read_idx;
- /* OCTEON TX2 will write the next packet at this index */
+ /* OCTEON 9 will write the next packet at this index */
uint32_t write_idx;
/* At this index, the driver will refill the descriptor's buffer */
@@ -326,7 +326,7 @@ struct otx_ep_droq {
*/
void *pkts_credit_reg;
- /** Pointer to the mapped packet sent register. OCTEON TX2 writes the
+ /** Pointer to the mapped packet sent register. OCTEON 9 writes the
* number of packets DMA'ed to host memory in this register.
*/
void *pkts_sent_reg;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index c3cec6d833..806add246b 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -102,7 +102,7 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = otx_ep_vf_setup_device(otx_epvf);
otx_epvf->fn_list.disable_io_queues(otx_epvf);
break;
- case PCI_DEVID_OCTEONTX2_EP_NET_VF:
+ case PCI_DEVID_CN9K_EP_NET_VF:
case PCI_DEVID_CN98XX_EP_NET_VF:
otx_epvf->chip_id = dev_id;
ret = otx2_ep_vf_setup_device(otx_epvf);
@@ -137,7 +137,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts;
- else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+ else if (otx_epvf->chip_id == PCI_DEVID_CN9K_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CN98XX_EP_NET_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts;
ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
@@ -422,7 +422,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->pdev = pdev;
otx_epdev_init(otx_epvf);
- if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF)
+ if (pdev->id.device_id == PCI_DEVID_CN9K_EP_NET_VF)
otx_epvf->pkind = SDP_OTX2_PKIND;
else
otx_epvf->pkind = SDP_PKIND;
@@ -450,7 +450,7 @@ otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
/* Set of PCI devices this driver supports */
static const struct rte_pci_id pci_id_otx_ep_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN9K_EP_NET_VF) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN98XX_EP_NET_VF) },
{ .vendor_id = 0, /* sentinel */ }
};
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 9338b30672..59df6ad857 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -85,7 +85,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq = otx_ep->instr_queue[iq_no];
q_size = conf->iq.instr_type * num_descs;
- /* IQ memory creation for Instruction submission to OCTEON TX2 */
+ /* IQ memory creation for Instruction submission to OCTEON 9 */
iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev,
"instr_queue", iq_no, q_size,
OTX_EP_PCI_RING_ALIGN,
@@ -106,8 +106,8 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->nb_desc = num_descs;
/* Create a IQ request list to hold requests that have been
- * posted to OCTEON TX2. This list will be used for freeing the IQ
- * data buffer(s) later once the OCTEON TX2 fetched the requests.
+ * posted to OCTEON 9. This list will be used for freeing the IQ
+ * data buffer(s) later once the OCTEON 9 fetched the requests.
*/
iq->req_list = rte_zmalloc_socket("request_list",
(iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE),
@@ -450,7 +450,7 @@ post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd)
uint8_t *iqptr, cmdsize;
/* This ensures that the read index does not wrap around to
- * the same position if queue gets full before OCTEON TX2 could
+ * the same position if queue gets full before OCTEON 9 could
* fetch any instr.
*/
if (iq->instr_pending > (iq->nb_desc - 1))
@@ -979,7 +979,7 @@ otx_ep_check_droq_pkts(struct otx_ep_droq *droq)
return new_pkts;
}
-/* Check for response arrival from OCTEON TX2
+/* Check for response arrival from OCTEON 9
* returns number of requests completed
*/
uint16_t
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 6cea732228..ace4627218 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -65,11 +65,11 @@
intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
'SVendor': None, 'SDevice': None}
-octeontx2_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
+cnxk_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
'SVendor': None, 'SDevice': None}
-octeontx2_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
+cnxk_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
'SVendor': None, 'SDevice': None}
-octeontx2_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
+cn9k_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
'SVendor': None, 'SDevice': None}
network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
@@ -77,10 +77,10 @@
crypto_devices = [encryption_class, intel_processor_class]
dma_devices = [cnxk_dma, hisilicon_dma,
intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
-eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso]
-mempool_devices = [cavium_fpa, octeontx2_npa]
+eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, cnxk_sso]
+mempool_devices = [cavium_fpa, cnxk_npa]
compress_devices = [cavium_zip]
-regex_devices = [octeontx2_ree]
+regex_devices = [cn9k_ree]
misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev,
intel_ntb_skx, intel_ntb_icx]
--
2.34.1
* [PATCH 2/2] doc: update LTS release cadence
@ 2021-12-13 16:48 5% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-12-13 16:48 UTC (permalink / raw)
To: dev, christian.ehrhardt, xuemingl; +Cc: bluca, Kevin Traynor
Regular LTS releases have previously aligned to DPDK main branch
releases so that fixes being backported have already gone through
DPDK main branch release validation.
Now that DPDK main branch has moved to 3 releases per year, the LTS
releases should continue to align with it and follow a similar release
cadence.
Update stable docs to reflect this.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
doc/guides/contributing/stable.rst | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/doc/guides/contributing/stable.rst b/doc/guides/contributing/stable.rst
index 69d8312b47..9ee7b4b7cc 100644
--- a/doc/guides/contributing/stable.rst
+++ b/doc/guides/contributing/stable.rst
@@ -39,5 +39,5 @@ A Stable Release is used to backport fixes from an ``N`` release back to an
``N-1`` release, for example, from 16.11 to 16.07.
-The duration of a stable is one complete release cycle (3 months). It can be
+The duration of a stable is one complete release cycle (4 months). It can be
longer, up to 1 year, if a maintainer continues to support the stable branch,
or if users supply backported fixes, however the explicit commitment should be
@@ -62,6 +62,8 @@ A LTS release may align with the declaration of a new major ABI version,
please read the :doc:`abi_policy` for more information.
-It is anticipated that there will be at least 4 releases per year of the LTS
-or approximately 1 every 3 months. However, the cadence can be shorter or
+It is anticipated that there will be at least 3 releases per year of the LTS
+or approximately 1 every 4 months. This is done to align with the DPDK main
+branch releases so that fixes have already gone through validation as part of
+the DPDK main branch release validation. However, the cadence can be shorter or
longer depending on the number and criticality of the backported
fixes. Releases should be coordinated with the validation engineers to ensure
--
2.31.1
* Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
From: Singh, Aman Deep @ 2021-12-15 11:33 UTC (permalink / raw)
To: Xiaoyun Li, ferruh.yigit, olivier.matz, mb, konstantin.ananyev,
stephen, vladimir.medvedkin
Cc: dev
On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
> UDP/TCP checksum in mbuf which can be over multi-segments.
>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> ---
> doc/guides/rel_notes/release_22_03.rst | 10 ++
> lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> lib/net/version.map | 10 ++
> 3 files changed, 206 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 6d99d1eaa9..7a082c4427 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -55,6 +55,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> + * Added the following functions to calculate UDP/TCP checksum of packets
> + which can be over multi-segments:
> + - ``rte_ipv4_udptcp_cksum_mbuf()``
> + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> + - ``rte_ipv6_udptcp_cksum_mbuf()``
> + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>
> Removed Items
> -------------
> @@ -84,6 +91,9 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
> + ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
> + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>
> ABI Changes
> -----------
> diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> index c575250852..534f401d26 100644
> --- a/lib/net/rte_ip.h
> +++ b/lib/net/rte_ip.h
> @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
> return cksum;
> }
>
> +/**
> + * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
> + */
> +static inline uint16_t
> +__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t raw_cksum;
> + uint32_t cksum;
> +
> + if (l4_off > m->pkt_len)
> + return 0;
> +
> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> + return 0;
> +
> + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> +
> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
At times, even after the above operation, "cksum" might still be above 16 bits, e.g. starting with "cksum = 0x1FFFF".
Can we consider using "return __rte_raw_cksum_reduce(cksum);" instead?
> +
> + return (uint16_t)cksum;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Compute the IPv4 UDP/TCP checksum of a packet.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv4_hdr
> + * The pointer to the contiguous IPv4 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * The complemented checksum to set in the L4 header.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> +
> + cksum = ~cksum;
> +
> + /*
> + * Per RFC 768: If the computed checksum is zero for UDP,
> + * it is transmitted as all ones
> + * (the equivalent in one's complement arithmetic).
> + */
> + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> + cksum = 0xffff;
> +
> + return cksum;
> +}
> +
> /**
> * Validate the IPv4 UDP or TCP checksum.
> *
> @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
> return 0;
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Verify the IPv4 UDP/TCP checksum of a packet.
> + *
> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
> + * (i.e. no checksum).
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv4_hdr
> + * The pointer to the contiguous IPv4 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * Return 0 if the checksum is correct, else -1.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> +
> + if (cksum != 0xffff)
> + return -1;
Any cksum other than 0xffff returns an error here. Is that the intent, or am I
missing something obvious?
> +
> + return 0;
> +}
> +
> /**
> * IPv6 Header
> */
> @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
> return cksum;
> }
>
> +/**
> + * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
> + */
> +static inline uint16_t
> +__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t raw_cksum;
> + uint32_t cksum;
> +
> + if (l4_off > m->pkt_len)
> + return 0;
> +
> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> + return 0;
> +
> + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> +
> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
Same here; please check if we can opt for __rte_raw_cksum_reduce(cksum).
> +
> + return (uint16_t)cksum;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Process the IPv6 UDP or TCP checksum of a packet.
> + *
> + * The IPv6 header must not be followed by extension headers. The layer 4
> + * checksum must be set to 0 in the L4 header by the caller.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv6_hdr
> + * The pointer to the contiguous IPv6 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * The complemented checksum to set in the L4 header.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> +
> + cksum = ~cksum;
> +
> + /*
> + * Per RFC 768: If the computed checksum is zero for UDP,
> + * it is transmitted as all ones
> + * (the equivalent in one's complement arithmetic).
> + */
> + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> + cksum = 0xffff;
> +
> + return cksum;
> +}
> +
> /**
> * Validate the IPv6 UDP or TCP checksum.
> *
> @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
> return 0;
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Validate the IPv6 UDP or TCP checksum of a packet.
> + *
> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
> + * this is either invalid or means no checksum in some situations. See 8.1
> + * (Upper-Layer Checksums) in RFC 8200.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv6_hdr
> + * The pointer to the contiguous IPv6 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * Return 0 if the checksum is correct, else -1.
> + */
> +__rte_experimental
> +static inline int
> +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> +
> + if (cksum != 0xffff)
> + return -1;
> +
> + return 0;
> +}
> +
> /** IPv6 fragment extension header. */
> #define RTE_IPV6_EHDR_MF_SHIFT 0
> #define RTE_IPV6_EHDR_MF_MASK 1
> diff --git a/lib/net/version.map b/lib/net/version.map
> index 4f4330d1c4..0f2aacdef8 100644
> --- a/lib/net/version.map
> +++ b/lib/net/version.map
> @@ -12,3 +12,13 @@ DPDK_22 {
>
> local: *;
> };
> +
> +EXPERIMENTAL {
> + global:
> +
> + # added in 22.03
> + rte_ipv4_udptcp_cksum_mbuf;
> + rte_ipv4_udptcp_cksum_mbuf_verify;
> + rte_ipv6_udptcp_cksum_mbuf;
> + rte_ipv6_udptcp_cksum_mbuf_verify;
> +};
* RE: [RFC] cryptodev: asymmetric crypto random number source
From: Kusztal, ArkadiuszX @ 2021-12-17 15:26 UTC (permalink / raw)
To: Ramkumar Balu, Akhil Goyal, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
> -----Original Message-----
> From: Ramkumar Balu <rbalu@marvell.com>
> Sent: Monday, December 13, 2021 10:27 AM
> To: Akhil Goyal <gakhil@marvell.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Anoob Joseph <anoobj@marvell.com>; Zhang,
> Roy Fan <roy.fan.zhang@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [RFC] cryptodev: asymmetric crypto random number source
>
> > ++Ram for openssl
> >
> > > ECDSA op:
> > > rte_crypto_param k;
> > > /**< The ECDSA per-message secret number, which is an
> > >integer
> > > * in the interval (1, n-1)
> > > */
> > > DSA op:
> > > No 'k'.
> > >
> > > This one I think have described some time ago:
> > > Only PMD that verifies ECDSA is OCTEON which apparently needs 'k' provided
> by user.
> > > Only PMD that verifies DSA is OpenSSL PMD which will generate its own
> random number internally.
> > >
> > > So in case PMD supports one of these options (or especially when supports
> both) we need to give some information here.
>
> We can have a standard way to represent if a particular rte_crypto_param is set
> by the application or not. Then, it is up to the PMD to perform the op or return
> error code if unable to proceed.
>
> > >
> > > The most obvious option would be to change rte_crypto_param k ->
> > > rte_crypto_param *k In case (k == NULL) PMD should generate it itself if
> possible, otherwise it should push crypto_op to the response ring with
> appropriate error code.
>
> This case could occur for other params as well. Having a few as nested variables
> and others as pointers could be confusing for memory alloc/dealloc. However,
> the rte_crypto_param already has a data pointer inside it which can be used in
> same manner. For example, in this case (k.data == NULL), PMD should generate
> random number if possible or push to response ring with error code. This can be
> done without breaking backward compatibility.
> This can be the standard way for PMDs to find if a particular rte_crypto_param is
> valid or NULL.
[Arek] Agreed, let's keep it as simple as possible; and agreed, it could be useful elsewhere, not only in the random number cases.
>
> > >
> > > Another options would be:
> > > - Extend rte_cryptodev_config and rte_cryptodev_info with
> > > information about random number generator for specific device
> > > (though it would be ABI breakage)
> > > - Provide some kind of callback to get random number from user
> > > (which could be useful for other things like RSA padding as well)
>
I think the previous solution itself is more straightforward and simpler, unless we
want the functionality to configure a random number generator for each
device.
>
> Thanks,
> Ramkumar Balu
>
* [PATCH v4 00/25] Net/SPNIC: support SPNIC into DPDK 22.03
From: Yanling Song @ 2021-12-25 11:28 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series of NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company supplying a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe NICs:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (25):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 212 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 485 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 774 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 367 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 185 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 139 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3212 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 738 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16589 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
* [PATCH v5 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
From: Yanling Song @ 2021-12-29 13:37 UTC (permalink / raw)
To: dev
Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit, stephen, lihuisong
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series of NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company supplying a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe NICs:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v4->v5:
1. Add prefix "spinc_" for external functions;
2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
3. Do not use void* for keeping the type information
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (26):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
net/spnic: Fix reviewers comments
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 201 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 483 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 770 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 366 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 183 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 138 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3212 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 728 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16558 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
* [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
From: Yanling Song @ 2021-12-30 6:08 UTC (permalink / raw)
To: dev
Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit, stephen, lihuisong
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series of NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company supplying a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe NICs:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v5->v6, no functional changes:
1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26 to patch 2;
2. Change the description of patch 26.
v4->v5:
1. Add prefix "spinc_" for external functions;
2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
3. Do not use void* for keeping the type information
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (26):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
net/spnic: fixes unsafe C style code
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 201 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 483 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 770 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 366 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 183 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 138 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3211 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 728 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16557 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
* RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
From: Li, Xiaoyun @ 2022-01-04 15:18 UTC (permalink / raw)
To: Singh, Aman Deep, Yigit, Ferruh, olivier.matz, mb, Ananyev,
Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
Hi
> -----Original Message-----
> From: Singh, Aman Deep <aman.deep.singh@intel.com>
> Sent: Wednesday, December 15, 2021 11:34
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> mb@smartsharesystems.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
> mbuf
>
>
> On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> > Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6 UDP/TCP
> > checksum in mbuf which can be over multi-segments.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > ---
> > doc/guides/rel_notes/release_22_03.rst | 10 ++
> > lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> > lib/net/version.map | 10 ++
> > 3 files changed, 206 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> > b/doc/guides/rel_notes/release_22_03.rst
> > index 6d99d1eaa9..7a082c4427 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -55,6 +55,13 @@ New Features
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> > + * Added the following functions to calculate UDP/TCP checksum of
> packets
> > + which can be over multi-segments:
> > + - ``rte_ipv4_udptcp_cksum_mbuf()``
> > + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> > + - ``rte_ipv6_udptcp_cksum_mbuf()``
> > + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> >
> > Removed Items
> > -------------
> > @@ -84,6 +91,9 @@ API Changes
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
> > + ``rte_ipv4_udptcp_cksum_mbuf_verify()``,
> > +``rte_ipv6_udptcp_cksum_mbuf()``,
> > + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> >
> > ABI Changes
> > -----------
> > diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h index
> > c575250852..534f401d26 100644
> > --- a/lib/net/rte_ip.h
> > +++ b/lib/net/rte_ip.h
> > @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr
> *ipv4_hdr, const void *l4_hdr)
> > return cksum;
> > }
> >
> > +/**
> > + * @internal Calculate the non-complemented IPv4 L4 checksum of a
> > +packet */ static inline uint16_t __rte_ipv4_udptcp_cksum_mbuf(const
> > +struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t raw_cksum;
> > + uint32_t cksum;
> > +
> > + if (l4_off > m->pkt_len)
> > + return 0;
> > +
> > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
> &raw_cksum))
> > + return 0;
> > +
> > + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> > +
> > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> At times, even after above operation "cksum" might stay above 16-bits, ex
> "cksum = 0x1FFFF" to start with.
> Can we consider using "return __rte_raw_cksum_reduce(cksum);"
Will use it in the next version. Thanks.
Also, not related to this patch, but doesn't this mean that __rte_ipv4_udptcp_cksum and __rte_ipv6_udptcp_cksum have the same issue?
Should anyone fix that?
> > +
> > + return (uint16_t)cksum;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Compute the IPv4 UDP/TCP checksum of a packet.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv4_hdr
> > + * The pointer to the contiguous IPv4 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * The complemented checksum to set in the L4 header.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> {
> > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
> l4_off);
> > +
> > + cksum = ~cksum;
> > +
> > + /*
> > + * Per RFC 768: If the computed checksum is zero for UDP,
> > + * it is transmitted as all ones
> > + * (the equivalent in one's complement arithmetic).
> > + */
> > + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> > + cksum = 0xffff;
> > +
> > + return cksum;
> > +}
> > +
> > /**
> > * Validate the IPv4 UDP or TCP checksum.
> > *
> > @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct
> rte_ipv4_hdr *ipv4_hdr,
> > return 0;
> > }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Verify the IPv4 UDP/TCP checksum of a packet.
> > + *
> > + * In case of UDP, the caller must first check if
> > +udp_hdr->dgram_cksum is 0
> > + * (i.e. no checksum).
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv4_hdr
> > + * The pointer to the contiguous IPv4 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * Return 0 if the checksum is correct, else -1.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
> l4_off);
> > +
> > + if (cksum != 0xffff)
> > + return -1;
> cksum other than 0xffff, should return error. Is that the intent or I am
> missing something obvious.
This is the intent. This function verifies whether the cksum already present in the packet is correct.
It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(): there, you first set the cksum in the UDP/TCP header to 0 and then calculate the cksum.
But here, the user calls this function directly on the original packet. If the UDP/TCP cksum is correct, the calculation (note that this calls __rte_ipv4_udptcp_cksum_mbuf(), so the result still needs to be complemented) yields 0xffff, i.e. ~cksum == 0, which means the cksum is correct. rte_ipv4/6_udptcp_cksum_verify() does the same.
> > +
> > + return 0;
> > +}
> > +
> > /**
> > * IPv6 Header
> > */
> > @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr
> *ipv6_hdr, const void *l4_hdr)
> > return cksum;
> > }
> >
> > +/**
> > + * @internal Calculate the non-complemented IPv6 L4 checksum of a
> > +packet */ static inline uint16_t __rte_ipv6_udptcp_cksum_mbuf(const
> > +struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t raw_cksum;
> > + uint32_t cksum;
> > +
> > + if (l4_off > m->pkt_len)
> > + return 0;
> > +
> > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
> &raw_cksum))
> > + return 0;
> > +
> > + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> > +
> > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
> > +
> > + return (uint16_t)cksum;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Process the IPv6 UDP or TCP checksum of a packet.
> > + *
> > + * The IPv6 header must not be followed by extension headers. The
> > +layer 4
> > + * checksum must be set to 0 in the L4 header by the caller.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv6_hdr
> > + * The pointer to the contiguous IPv6 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * The complemented checksum to set in the L4 header.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> {
> > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
> l4_off);
> > +
> > + cksum = ~cksum;
> > +
> > + /*
> > + * Per RFC 768: If the computed checksum is zero for UDP,
> > + * it is transmitted as all ones
> > + * (the equivalent in one's complement arithmetic).
> > + */
> > + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> > + cksum = 0xffff;
> > +
> > + return cksum;
> > +}
> > +
> > /**
> > * Validate the IPv6 UDP or TCP checksum.
> > *
> > @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct
> rte_ipv6_hdr *ipv6_hdr,
> > return 0;
> > }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Validate the IPv6 UDP or TCP checksum of a packet.
> > + *
> > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
> > + * this is either invalid or means no checksum in some situations.
> > +See 8.1
> > + * (Upper-Layer Checksums) in RFC 8200.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv6_hdr
> > + * The pointer to the contiguous IPv6 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * Return 0 if the checksum is correct, else -1.
> > + */
> > +__rte_experimental
> > +static inline int
> > +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
> l4_off);
> > +
> > + if (cksum != 0xffff)
> > + return -1;
> > +
> > + return 0;
> > +}
> > +
> > /** IPv6 fragment extension header. */
> > #define RTE_IPV6_EHDR_MF_SHIFT 0
> > #define RTE_IPV6_EHDR_MF_MASK 1
> > diff --git a/lib/net/version.map b/lib/net/version.map index
> > 4f4330d1c4..0f2aacdef8 100644
> > --- a/lib/net/version.map
> > +++ b/lib/net/version.map
> > @@ -12,3 +12,13 @@ DPDK_22 {
> >
> > local: *;
> > };
> > +
> > +EXPERIMENTAL {
> > + global:
> > +
> > + # added in 22.03
> > + rte_ipv4_udptcp_cksum_mbuf;
> > + rte_ipv4_udptcp_cksum_mbuf_verify;
> > + rte_ipv6_udptcp_cksum_mbuf;
> > + rte_ipv6_udptcp_cksum_mbuf_verify;
> > +};
^ permalink raw reply [relevance 0%]
* RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2022-01-04 15:18 0% ` Li, Xiaoyun
@ 2022-01-04 15:40 0% ` Li, Xiaoyun
2022-01-06 12:56 0% ` Singh, Aman Deep
0 siblings, 1 reply; 200+ results
From: Li, Xiaoyun @ 2022-01-04 15:40 UTC (permalink / raw)
To: Li, Xiaoyun, Singh, Aman Deep, Yigit, Ferruh, olivier.matz, mb,
Ananyev, Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
Hi
> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li@intel.com>
> Sent: Tuesday, January 4, 2022 15:19
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> mb@smartsharesystems.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
> mbuf
>
> Hi
>
> > -----Original Message-----
> > From: Singh, Aman Deep <aman.deep.singh@intel.com>
> > Sent: Wednesday, December 15, 2021 11:34
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> > mb@smartsharesystems.com; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> Medvedkin,
> > Vladimir <vladimir.medvedkin@intel.com>
> > Cc: dev@dpdk.org
> > Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP
> > cksum in mbuf
> >
> >
> > On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> > > Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
> > > UDP/TCP checksum in mbuf which can be over multi-segments.
> > >
> > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > ---
> > > doc/guides/rel_notes/release_22_03.rst | 10 ++
> > > lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> > > lib/net/version.map | 10 ++
> > > 3 files changed, 206 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/release_22_03.rst
> > > b/doc/guides/rel_notes/release_22_03.rst
> > > index 6d99d1eaa9..7a082c4427 100644
> > > --- a/doc/guides/rel_notes/release_22_03.rst
> > > +++ b/doc/guides/rel_notes/release_22_03.rst
> > > @@ -55,6 +55,13 @@ New Features
> > > Also, make sure to start the actual text at the margin.
> > > =======================================================
> > >
> > > +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> > > + * Added the following functions to calculate UDP/TCP checksum of
> > packets
> > > + which can be over multi-segments:
> > > + - ``rte_ipv4_udptcp_cksum_mbuf()``
> > > + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> > > + - ``rte_ipv6_udptcp_cksum_mbuf()``
> > > + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> > >
> > > Removed Items
> > > -------------
> > > @@ -84,6 +91,9 @@ API Changes
> > > Also, make sure to start the actual text at the margin.
> > > =======================================================
> > >
> > > +* net: added experimental functions
> > > +``rte_ipv4_udptcp_cksum_mbuf()``,
> > > + ``rte_ipv4_udptcp_cksum_mbuf_verify()``,
> > > +``rte_ipv6_udptcp_cksum_mbuf()``,
> > > + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> > >
> > > ABI Changes
> > > -----------
> > > diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h index
> > > c575250852..534f401d26 100644
> > > --- a/lib/net/rte_ip.h
> > > +++ b/lib/net/rte_ip.h
> > > @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct
> rte_ipv4_hdr
> > *ipv4_hdr, const void *l4_hdr)
> > > return cksum;
> > > }
> > >
> > > +/**
> > > + * @internal Calculate the non-complemented IPv4 L4 checksum of a
> > > +packet */ static inline uint16_t
> > > +__rte_ipv4_udptcp_cksum_mbuf(const
> > > +struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t raw_cksum;
> > > + uint32_t cksum;
> > > +
> > > + if (l4_off > m->pkt_len)
> > > + return 0;
> > > +
> > > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
> > &raw_cksum))
> > > + return 0;
> > > +
> > > + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> > > +
> > > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> > At times, even after above operation "cksum" might stay above 16-bits,
> > ex "cksum = 0x1FFFF" to start with.
> > Can we consider using "return __rte_raw_cksum_reduce(cksum);"
>
> Will use it in next version. Thanks.
>
> Also, not related to this patch. It means that __rte_ipv4_udptcp_cksum and
> __rte_ipv6_udptcp_cksum have the same issue, right?
> Should anyone fix that?
I forgot the context here.
rte_raw_cksum_mbuf() already calls __rte_raw_cksum_reduce().
So the sum is actually uint16_t + uint16_t, which makes your case impossible; there's no need to call __rte_raw_cksum_reduce() again.
>
> > > +
> > > + return (uint16_t)cksum;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Compute the IPv4 UDP/TCP checksum of a packet.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv4_hdr
> > > + * The pointer to the contiguous IPv4 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * The complemented checksum to set in the L4 header.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> > {
> > > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
> > l4_off);
> > > +
> > > + cksum = ~cksum;
> > > +
> > > + /*
> > > + * Per RFC 768: If the computed checksum is zero for UDP,
> > > + * it is transmitted as all ones
> > > + * (the equivalent in one's complement arithmetic).
> > > + */
> > > + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> > > + cksum = 0xffff;
> > > +
> > > + return cksum;
> > > +}
> > > +
> > > /**
> > > * Validate the IPv4 UDP or TCP checksum.
> > > *
> > > @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct
> > rte_ipv4_hdr *ipv4_hdr,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Verify the IPv4 UDP/TCP checksum of a packet.
> > > + *
> > > + * In case of UDP, the caller must first check if
> > > +udp_hdr->dgram_cksum is 0
> > > + * (i.e. no checksum).
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv4_hdr
> > > + * The pointer to the contiguous IPv4 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * Return 0 if the checksum is correct, else -1.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
> > l4_off);
> > > +
> > > + if (cksum != 0xffff)
> > > + return -1;
> > cksum other than 0xffff, should return error. Is that the intent or I
> > am missing something obvious.
>
> This is the intent. This function is to verify if the cksum in the packet is correct.
>
> It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(). When calling
> rte_ipv4/6_udptcp_cksum_mbuf(), you need to set the cksum in udp/tcp
> header as 0. Then calculate the cksum.
>
> But here, user should directly call this function with the original packet. Then
> if the udp/tcp cksum is correct, after the calculation (please note that, this is
> calling __rte_ipv4_udptcp_cksum_mbuf(), so the result needs to be ~), it
> should be 0xffff, namely, ~cksum = 0 which means cksum is correct. You can
> see rte_ipv4/6_udptcp_cksum_verify() is doing the same.
>
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /**
> > > * IPv6 Header
> > > */
> > > @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct
> rte_ipv6_hdr
> > *ipv6_hdr, const void *l4_hdr)
> > > return cksum;
> > > }
> > >
> > > +/**
> > > + * @internal Calculate the non-complemented IPv6 L4 checksum of a
> > > +packet */ static inline uint16_t
> > > +__rte_ipv6_udptcp_cksum_mbuf(const
> > > +struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t raw_cksum;
> > > + uint32_t cksum;
> > > +
> > > + if (l4_off > m->pkt_len)
> > > + return 0;
> > > +
> > > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
> > &raw_cksum))
> > > + return 0;
> > > +
> > > + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> > > +
> > > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> > Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
> > > +
> > > + return (uint16_t)cksum;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Process the IPv6 UDP or TCP checksum of a packet.
> > > + *
> > > + * The IPv6 header must not be followed by extension headers. The
> > > +layer 4
> > > + * checksum must be set to 0 in the L4 header by the caller.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv6_hdr
> > > + * The pointer to the contiguous IPv6 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * The complemented checksum to set in the L4 header.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> > {
> > > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
> > l4_off);
> > > +
> > > + cksum = ~cksum;
> > > +
> > > + /*
> > > + * Per RFC 768: If the computed checksum is zero for UDP,
> > > + * it is transmitted as all ones
> > > + * (the equivalent in one's complement arithmetic).
> > > + */
> > > + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> > > + cksum = 0xffff;
> > > +
> > > + return cksum;
> > > +}
> > > +
> > > /**
> > > * Validate the IPv6 UDP or TCP checksum.
> > > *
> > > @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct
> > rte_ipv6_hdr *ipv6_hdr,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Validate the IPv6 UDP or TCP checksum of a packet.
> > > + *
> > > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is
> 0:
> > > + * this is either invalid or means no checksum in some situations.
> > > +See 8.1
> > > + * (Upper-Layer Checksums) in RFC 8200.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv6_hdr
> > > + * The pointer to the contiguous IPv6 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * Return 0 if the checksum is correct, else -1.
> > > + */
> > > +__rte_experimental
> > > +static inline int
> > > +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
> > l4_off);
> > > +
> > > + if (cksum != 0xffff)
> > > + return -1;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /** IPv6 fragment extension header. */
> > > #define RTE_IPV6_EHDR_MF_SHIFT 0
> > > #define RTE_IPV6_EHDR_MF_MASK 1
> > > diff --git a/lib/net/version.map b/lib/net/version.map index
> > > 4f4330d1c4..0f2aacdef8 100644
> > > --- a/lib/net/version.map
> > > +++ b/lib/net/version.map
> > > @@ -12,3 +12,13 @@ DPDK_22 {
> > >
> > > local: *;
> > > };
> > > +
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + # added in 22.03
> > > + rte_ipv4_udptcp_cksum_mbuf;
> > > + rte_ipv4_udptcp_cksum_mbuf_verify;
> > > + rte_ipv6_udptcp_cksum_mbuf;
> > > + rte_ipv6_udptcp_cksum_mbuf_verify;
> > > +};
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2022-01-04 15:40 0% ` Li, Xiaoyun
@ 2022-01-06 12:56 0% ` Singh, Aman Deep
0 siblings, 0 replies; 200+ results
From: Singh, Aman Deep @ 2022-01-06 12:56 UTC (permalink / raw)
To: Li, Xiaoyun, Yigit, Ferruh, olivier.matz, mb, Ananyev,
Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
On 1/4/2022 9:10 PM, Li, Xiaoyun wrote:
> Hi
>
>> -----Original Message-----
>> From: Li, Xiaoyun<xiaoyun.li@intel.com>
>> Sent: Tuesday, January 4, 2022 15:19
>> To: Singh, Aman Deep<aman.deep.singh@intel.com>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>;olivier.matz@6wind.com;
>> mb@smartsharesystems.com; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>;stephen@networkplumber.org;
>> Medvedkin, Vladimir<vladimir.medvedkin@intel.com>
>> Cc:dev@dpdk.org
>> Subject: RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
>> mbuf
>>
>> Hi
>>
>>> -----Original Message-----
>>> From: Singh, Aman Deep<aman.deep.singh@intel.com>
>>> Sent: Wednesday, December 15, 2021 11:34
>>> To: Li, Xiaoyun<xiaoyun.li@intel.com>; Yigit, Ferruh
>>> <ferruh.yigit@intel.com>;olivier.matz@6wind.com;
>>> mb@smartsharesystems.com; Ananyev, Konstantin
>>> <konstantin.ananyev@intel.com>;stephen@networkplumber.org;
>> Medvedkin,
>>> Vladimir<vladimir.medvedkin@intel.com>
>>> Cc:dev@dpdk.org
>>> Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP
>>> cksum in mbuf
>>>
>>>
>>> On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
>>>> Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
>>>> UDP/TCP checksum in mbuf which can be over multi-segments.
>>>>
>>>> Signed-off-by: Xiaoyun Li<xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
>>>> ---
>>>> doc/guides/rel_notes/release_22_03.rst | 10 ++
>>>> lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
>>>> lib/net/version.map | 10 ++
>>>> 3 files changed, 206 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/release_22_03.rst
>>>> b/doc/guides/rel_notes/release_22_03.rst
>>>> index 6d99d1eaa9..7a082c4427 100644
>>>> --- a/doc/guides/rel_notes/release_22_03.rst
>>>> +++ b/doc/guides/rel_notes/release_22_03.rst
>>>> @@ -55,6 +55,13 @@ New Features
>>>> Also, make sure to start the actual text at the margin.
>>>> =======================================================
>>>>
>>>> +* **Added functions to calculate UDP/TCP checksum in mbuf.**
>>>> + * Added the following functions to calculate UDP/TCP checksum of
>>> packets
>>>> + which can be over multi-segments:
>>>> + - ``rte_ipv4_udptcp_cksum_mbuf()``
>>>> + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
>>>> + - ``rte_ipv6_udptcp_cksum_mbuf()``
>>>> + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>>>>
>>>> Removed Items
>>>> -------------
>>>> @@ -84,6 +91,9 @@ API Changes
>>>> Also, make sure to start the actual text at the margin.
>>>> =======================================================
>>>>
>>>> +* net: added experimental functions
>>>> +``rte_ipv4_udptcp_cksum_mbuf()``,
>>>> + ``rte_ipv4_udptcp_cksum_mbuf_verify()``,
>>>> +``rte_ipv6_udptcp_cksum_mbuf()``,
>>>> + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>>>>
>>>> ABI Changes
>>>> -----------
>>>> diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h index
>>>> c575250852..534f401d26 100644
>>>> --- a/lib/net/rte_ip.h
>>>> +++ b/lib/net/rte_ip.h
>>>> @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct
>> rte_ipv4_hdr
>>> *ipv4_hdr, const void *l4_hdr)
>>>> return cksum;
>>>> }
>>>>
>>>> +/**
>>>> + * @internal Calculate the non-complemented IPv4 L4 checksum of a
>>>> +packet */ static inline uint16_t
>>>> +__rte_ipv4_udptcp_cksum_mbuf(const
>>>> +struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t raw_cksum;
>>>> + uint32_t cksum;
>>>> +
>>>> + if (l4_off > m->pkt_len)
>>>> + return 0;
>>>> +
>>>> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
>>> &raw_cksum))
>>>> + return 0;
>>>> +
>>>> + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
>>>> +
>>>> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
>>> At times, even after above operation "cksum" might stay above 16-bits,
>>> ex "cksum = 0x1FFFF" to start with.
>>> Can we consider using "return __rte_raw_cksum_reduce(cksum);"
>> Will use it in next version. Thanks.
>>
>> Also, not related to this patch. It means that __rte_ipv4_udptcp_cksum and
>> __rte_ipv6_udptcp_cksum have the same issue, right?
>> Should anyone fix that?
> Forgot the intent here.
> rte_raw_cksum_mbuf() already calls __rte_raw_cksum_reduce().
> So actually, it's a result of uint16_t + uint16_t. So it's impossible of your case. There's no need to call __rte_raw_cksum_reduce().
Got it, thanks. With u16 + u16, at most a 1-bit overflow is possible, so
the effective operation here reduces to:
cksum = ((cksum & 0x10000) >> 16) + (cksum & 0xffff);
>>>> +
>>>> + return (uint16_t)cksum;
>>>> +}
>>>> +
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Compute the IPv4 UDP/TCP checksum of a packet.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv4_hdr
>>>> + * The pointer to the contiguous IPv4 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * The complemented checksum to set in the L4 header.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
>>> {
>>>> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
>>> l4_off);
>>>> +
>>>> + cksum = ~cksum;
>>>> +
>>>> + /*
>>>> + * Per RFC 768: If the computed checksum is zero for UDP,
>>>> + * it is transmitted as all ones
>>>> + * (the equivalent in one's complement arithmetic).
>>>> + */
>>>> + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
>>>> + cksum = 0xffff;
>>>> +
>>>> + return cksum;
>>>> +}
>>>> +
>>>> /**
>>>> * Validate the IPv4 UDP or TCP checksum.
>>>> *
>>>> @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct
>>> rte_ipv4_hdr *ipv4_hdr,
>>>> return 0;
>>>> }
>>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Verify the IPv4 UDP/TCP checksum of a packet.
>>>> + *
>>>> + * In case of UDP, the caller must first check if
>>>> +udp_hdr->dgram_cksum is 0
>>>> + * (i.e. no checksum).
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv4_hdr
>>>> + * The pointer to the contiguous IPv4 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * Return 0 if the checksum is correct, else -1.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
>>> l4_off);
>>>> +
>>>> + if (cksum != 0xffff)
>>>> + return -1;
>>> cksum other than 0xffff, should return error. Is that the intent or I
>>> am missing something obvious.
>> This is the intent. This function is to verify if the cksum in the packet is correct.
>>
>> It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(). When calling
>> rte_ipv4/6_udptcp_cksum_mbuf(), you need to set the cksum in udp/tcp
>> header as 0. Then calculate the cksum.
>>
>> But here, user should directly call this function with the original packet. Then
>> if the udp/tcp cksum is correct, after the calculation (please note that, this is
>> calling __rte_ipv4_udptcp_cksum_mbuf(), so the result needs to be ~), it
>> should be 0xffff, namely, ~cksum = 0 which means cksum is correct. You can
>> see rte_ipv4/6_udptcp_cksum_verify() is doing the same.
>>
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> /**
>>>> * IPv6 Header
>>>> */
>>>> @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct
>> rte_ipv6_hdr
>>> *ipv6_hdr, const void *l4_hdr)
>>>> return cksum;
>>>> }
>>>>
>>>> +/**
>>>> + * @internal Calculate the non-complemented IPv6 L4 checksum of a
>>>> +packet */ static inline uint16_t
>>>> +__rte_ipv6_udptcp_cksum_mbuf(const
>>>> +struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t raw_cksum;
>>>> + uint32_t cksum;
>>>> +
>>>> + if (l4_off > m->pkt_len)
>>>> + return 0;
>>>> +
>>>> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
>>> &raw_cksum))
>>>> + return 0;
>>>> +
>>>> + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
>>>> +
>>>> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
>>> Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
>>>> +
>>>> + return (uint16_t)cksum;
>>>> +}
>>>> +
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Process the IPv6 UDP or TCP checksum of a packet.
>>>> + *
>>>> + * The IPv6 header must not be followed by extension headers. The
>>>> +layer 4
>>>> + * checksum must be set to 0 in the L4 header by the caller.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv6_hdr
>>>> + * The pointer to the contiguous IPv6 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * The complemented checksum to set in the L4 header.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
>>> {
>>>> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
>>> l4_off);
>>>> +
>>>> + cksum = ~cksum;
>>>> +
>>>> + /*
>>>> + * Per RFC 768: If the computed checksum is zero for UDP,
>>>> + * it is transmitted as all ones
>>>> + * (the equivalent in one's complement arithmetic).
>>>> + */
>>>> + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
>>>> + cksum = 0xffff;
>>>> +
>>>> + return cksum;
>>>> +}
>>>> +
>>>> /**
>>>> * Validate the IPv6 UDP or TCP checksum.
>>>> *
>>>> @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct
>>> rte_ipv6_hdr *ipv6_hdr,
>>>> return 0;
>>>> }
>>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Validate the IPv6 UDP or TCP checksum of a packet.
>>>> + *
>>>> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is
>> 0:
>>>> + * this is either invalid or means no checksum in some situations.
>>>> +See 8.1
>>>> + * (Upper-Layer Checksums) in RFC 8200.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv6_hdr
>>>> + * The pointer to the contiguous IPv6 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * Return 0 if the checksum is correct, else -1.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline int
>>>> +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
>>> l4_off);
>>>> +
>>>> + if (cksum != 0xffff)
>>>> + return -1;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> /** IPv6 fragment extension header. */
>>>> #define RTE_IPV6_EHDR_MF_SHIFT 0
>>>> #define RTE_IPV6_EHDR_MF_MASK 1
>>>> diff --git a/lib/net/version.map b/lib/net/version.map index
>>>> 4f4330d1c4..0f2aacdef8 100644
>>>> --- a/lib/net/version.map
>>>> +++ b/lib/net/version.map
>>>> @@ -12,3 +12,13 @@ DPDK_22 {
>>>>
>>>> local: *;
>>>> };
>>>> +
>>>> +EXPERIMENTAL {
>>>> + global:
>>>> +
>>>> + # added in 22.03
>>>> + rte_ipv4_udptcp_cksum_mbuf;
>>>> + rte_ipv4_udptcp_cksum_mbuf_verify;
>>>> + rte_ipv6_udptcp_cksum_mbuf;
>>>> + rte_ipv6_udptcp_cksum_mbuf_verify;
>>>> +};
^ permalink raw reply [relevance 0%]
* [PATCH v5 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
@ 2022-01-06 16:03 3% ` Xiaoyun Li
0 siblings, 0 replies; 200+ results
From: Xiaoyun Li @ 2022-01-06 16:03 UTC (permalink / raw)
To: Aman.Deep.Singh, ferruh.yigit, olivier.matz, mb,
konstantin.ananyev, stephen, vladimir.medvedkin
Cc: dev, Xiaoyun Li, Aman Singh, Sunil Pai G
Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
UDP/TCP checksum in mbuf which can be over multi-segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 11 ++
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
2 files changed, 197 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..785fd22001 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,14 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added functions to calculate UDP/TCP checksum in mbuf.**
+
+ * Added the following functions to calculate UDP/TCP checksum of packets
+ which can be over multi-segments:
+ - ``rte_ipv4_udptcp_cksum_mbuf()``
+ - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
+ - ``rte_ipv6_udptcp_cksum_mbuf()``
+ - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
Removed Items
-------------
@@ -84,6 +92,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
+ ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
+ ``rte_ipv6_udptcp_cksum_mbuf_verify()``
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
--
2.25.1
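The RFC 768 special case applied in rte_ipv4_udptcp_cksum_mbuf() and its
IPv6 counterpart above can be isolated as a tiny sketch. The function
name below is illustrative, not a DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Complement the accumulated one's-complement sum and apply the
 * RFC 768 rule for UDP: a computed checksum of zero is transmitted
 * as all ones, because an all-zero checksum field on the wire means
 * "no checksum was computed" (for IPv4). */
static uint16_t udp_finalize_cksum(uint16_t noncomplemented_sum)
{
	uint16_t cksum = (uint16_t)~noncomplemented_sum;

	if (cksum == 0)
		cksum = 0xffff;
	return cksum;
}
```

Verification then needs no special case: recomputing the sum over the
payload with the stored checksum included must yield 0xffff, which is
exactly the check the *_verify() variants above perform.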
^ permalink raw reply [relevance 3%]
* [PATCH v5 1/2] eal: add API for bus close
@ 2022-01-10 5:26 3% ` rohit.raj
0 siblings, 0 replies; 200+ results
From: rohit.raj @ 2022-01-10 5:26 UTC (permalink / raw)
To: Bruce Richardson, Ray Kinsella, Dmitry Kozlyuk,
Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
Cc: dev, nipun.gupta, sachin.saxena, hemant.agrawal, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The current code provides an API for bus probe, but the
corresponding bus close API is missing. This breaks multi-process
scenarios, as objects are not cleaned up when secondary
processes terminate.
This patch adds a new API rte_bus_close() for cleanup of
bus objects which were acquired during probe.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
Rebased on this patch series:
https://patches.dpdk.org/project/dpdk/list/?series=21049
v5:
* Updated release notes for new feature and API change.
* Added support for error checking while closing bus.
* Added experimental banner for new API.
* Squashed changes related to freebsd and windows into single patch.
* Discarded patch to fix a bug which is already fixed on latest
release.
v4:
* Added comments to clarify responsibility of rte_bus_close.
* Added support for rte_bus_close on freebsd.
* Added support for rte_bus_close on windows.
v3:
* nit: combined nested if statements.
v2:
* Moved rte_bus_close call to rte_eal_cleanup path.
doc/guides/rel_notes/release_22_03.rst | 8 +++++++
lib/eal/common/eal_common_bus.c | 33 +++++++++++++++++++++++++-
lib/eal/freebsd/eal.c | 1 +
lib/eal/include/rte_bus.h | 30 ++++++++++++++++++++++-
lib/eal/linux/eal.c | 8 +++++++
lib/eal/version.map | 3 +++
lib/eal/windows/eal.c | 1 +
7 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..7417606a2a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+ * **Added support for closing buses.**
+
+ Added the capability for a user to clean up bus objects which
+ were acquired during bus probe.
+
Removed Items
-------------
@@ -84,6 +89,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+ * eal: Added new API ``rte_bus_close`` to perform cleanup of bus objects which
+ were acquired during bus probe.
+
ABI Changes
-----------
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index baa5b532af..2c3c0a90d2 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2016 NXP
+ * Copyright 2016,2022 NXP
*/
#include <stdio.h>
@@ -85,6 +85,37 @@ rte_bus_probe(void)
return 0;
}
+/* Close all devices of all buses */
+int
+rte_bus_close(void)
+{
+ int ret;
+ struct rte_bus *bus, *vbus = NULL;
+
+ TAILQ_FOREACH(bus, &rte_bus_list, next) {
+ if (!strcmp(bus->name, "vdev")) {
+ vbus = bus;
+ continue;
+ }
+
+ if (bus->close) {
+ ret = bus->close();
+ if (ret)
+ RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
+ bus->name);
+ }
+ }
+
+ if (vbus && vbus->close) {
+ ret = vbus->close();
+ if (ret)
+ RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
+ vbus->name);
+ }
+
+ return 0;
+}
+
/* Dump information of a single bus */
static int
bus_dump_one(FILE *f, struct rte_bus *bus)
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index a1cd2462db..87d70c6898 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -984,6 +984,7 @@ rte_eal_cleanup(void)
{
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ rte_bus_close();
rte_service_finalize();
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index bbbb6efd28..c6211bbd95 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2016 NXP
+ * Copyright 2016,2022 NXP
*/
#ifndef _RTE_BUS_H_
@@ -66,6 +66,23 @@ typedef int (*rte_bus_scan_t)(void);
*/
typedef int (*rte_bus_probe_t)(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Implementation specific close function which is responsible for resetting all
+ * detected devices on the bus to a default state, closing UIO nodes or VFIO
+ * groups and also freeing any memory allocated during rte_bus_probe like
+ * private resources for device list.
+ *
+ * This is called while iterating over each registered bus.
+ *
+ * @return
+ * 0 for successful close
+ * !0 for any error while closing
+ */
+typedef int (*rte_bus_close_t)(void);
+
/**
* Device iterator to find a device on a bus.
*
@@ -263,6 +280,7 @@ struct rte_bus {
const char *name; /**< Name of the bus */
rte_bus_scan_t scan; /**< Scan for devices attached to bus */
rte_bus_probe_t probe; /**< Probe devices on bus */
+ rte_bus_close_t close; /**< Close devices on bus */
rte_bus_find_device_t find_device; /**< Find a device on the bus */
rte_bus_plug_t plug; /**< Probe single device for drivers */
rte_bus_unplug_t unplug; /**< Remove single device from driver */
@@ -317,6 +335,16 @@ int rte_bus_scan(void);
*/
int rte_bus_probe(void);
+/**
+ * For each device on the buses, call the device specific close.
+ *
+ * @return
+ * 0 for successful close
+ * !0 otherwise
+ */
+__rte_experimental
+int rte_bus_close(void);
+
/**
* Dump information of all the buses registered with EAL.
*
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 60b4924838..5c60131e46 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1362,6 +1362,14 @@ rte_eal_cleanup(void)
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
rte_memseg_walk(mark_freeable, NULL);
+
+ /* Close all the buses and devices/drivers on them */
+ if (rte_bus_close()) {
+ rte_eal_init_alert("Cannot close devices");
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
rte_service_finalize();
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index ab28c22791..39882dbbd5 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -420,6 +420,9 @@ EXPERIMENTAL {
rte_intr_instance_free;
rte_intr_type_get;
rte_intr_type_set;
+
+ # added in 22.03
+ rte_bus_close;
};
INTERNAL {
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 67db7f099a..5915ab6291 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -260,6 +260,7 @@ rte_eal_cleanup(void)
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ rte_bus_close();
eal_intr_thread_cancel();
eal_mem_virt2iova_cleanup();
/* after this point, any DPDK pointers will become dangling */
--
2.17.1
^ permalink raw reply [relevance 3%]
* [PATCH v3] ethdev: mark old macros as deprecated
@ 2022-01-12 14:36 1% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-01-12 14:36 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko, Hemant Agrawal,
Tyler Retzlaff, Chenbo Xia, Jerin Jacob
Cc: dev, Ferruh Yigit, Stephen Hemminger
Old macros were kept for backward compatibility, but this causes old
macro usage to sneak in silently.
Mark the old macros as deprecated. The downside is that this will
cause some noise for applications that are still using the old macros.
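The mechanism relied on here is DPDK's RTE_DEPRECATED macro from
rte_common.h, which expands to a _Pragma emitting a compile-time warning
before yielding the new value. A simplified, hedged reconstruction (the
upstream definition differs in detail, e.g. it builds the warning text
from the macro name):

```c
#include <assert.h>

/* Simplified deprecation plumbing: using the old name compiles to the
 * new value but emits a warning at each use site (GCC/clang). */
#define MY_DEPRECATED _Pragma("GCC warning \"this macro is deprecated\"")

#define RTE_ETH_LINK_UP 1
/* Old name kept working, but now noisy at compile time. */
#define ETH_LINK_UP MY_DEPRECATED RTE_ETH_LINK_UP

static int link_status = ETH_LINK_UP; /* compiles, warns here */
```

This is what turns the previously silent backward-compatibility aliases
into deprecation warnings without breaking existing builds.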
Fixes: 295968d17407 ("ethdev: add namespace")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2:
* Release notes updated
v3:
* Update 22.03 release note
---
doc/guides/rel_notes/release_22_03.rst | 3 +
lib/ethdev/rte_ethdev.h | 474 +++++++++++++------------
2 files changed, 247 insertions(+), 230 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa94a..16c66c0641d4 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
+ which are kept for backward compatibility, are marked as deprecated.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa299c8ad70e..147cc1ced36a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -288,76 +288,78 @@ struct rte_eth_stats {
* Device supported speeds bitmap flags
*/
#define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
#define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
#define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
#define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
#define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
#define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
#define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
#define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
-#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**@}*/
+#define ETH_LINK_SPEED_AUTONEG RTE_DEPRECATED(ETH_LINK_SPEED_AUTONEG) RTE_ETH_LINK_SPEED_AUTONEG
+#define ETH_LINK_SPEED_FIXED RTE_DEPRECATED(ETH_LINK_SPEED_FIXED) RTE_ETH_LINK_SPEED_FIXED
+#define ETH_LINK_SPEED_10M_HD RTE_DEPRECATED(ETH_LINK_SPEED_10M_HD) RTE_ETH_LINK_SPEED_10M_HD
+#define ETH_LINK_SPEED_10M RTE_DEPRECATED(ETH_LINK_SPEED_10M) RTE_ETH_LINK_SPEED_10M
+#define ETH_LINK_SPEED_100M_HD RTE_DEPRECATED(ETH_LINK_SPEED_100M_HD) RTE_ETH_LINK_SPEED_100M_HD
+#define ETH_LINK_SPEED_100M RTE_DEPRECATED(ETH_LINK_SPEED_100M) RTE_ETH_LINK_SPEED_100M
+#define ETH_LINK_SPEED_1G RTE_DEPRECATED(ETH_LINK_SPEED_1G) RTE_ETH_LINK_SPEED_1G
+#define ETH_LINK_SPEED_2_5G RTE_DEPRECATED(ETH_LINK_SPEED_2_5G) RTE_ETH_LINK_SPEED_2_5G
+#define ETH_LINK_SPEED_5G RTE_DEPRECATED(ETH_LINK_SPEED_5G) RTE_ETH_LINK_SPEED_5G
+#define ETH_LINK_SPEED_10G RTE_DEPRECATED(ETH_LINK_SPEED_10G) RTE_ETH_LINK_SPEED_10G
+#define ETH_LINK_SPEED_20G RTE_DEPRECATED(ETH_LINK_SPEED_20G) RTE_ETH_LINK_SPEED_20G
+#define ETH_LINK_SPEED_25G RTE_DEPRECATED(ETH_LINK_SPEED_25G) RTE_ETH_LINK_SPEED_25G
+#define ETH_LINK_SPEED_40G RTE_DEPRECATED(ETH_LINK_SPEED_40G) RTE_ETH_LINK_SPEED_40G
+#define ETH_LINK_SPEED_50G RTE_DEPRECATED(ETH_LINK_SPEED_50G) RTE_ETH_LINK_SPEED_50G
+#define ETH_LINK_SPEED_56G RTE_DEPRECATED(ETH_LINK_SPEED_56G) RTE_ETH_LINK_SPEED_56G
+#define ETH_LINK_SPEED_100G RTE_DEPRECATED(ETH_LINK_SPEED_100G) RTE_ETH_LINK_SPEED_100G
+#define ETH_LINK_SPEED_200G RTE_DEPRECATED(ETH_LINK_SPEED_200G) RTE_ETH_LINK_SPEED_200G
+
/**@{@name Link speed
* Ethernet numeric link speeds in Mbps
*/
#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
-#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**@}*/
+#define ETH_SPEED_NUM_NONE RTE_DEPRECATED(ETH_SPEED_NUM_NONE) RTE_ETH_SPEED_NUM_NONE
+#define ETH_SPEED_NUM_10M RTE_DEPRECATED(ETH_SPEED_NUM_10M) RTE_ETH_SPEED_NUM_10M
+#define ETH_SPEED_NUM_100M RTE_DEPRECATED(ETH_SPEED_NUM_100M) RTE_ETH_SPEED_NUM_100M
+#define ETH_SPEED_NUM_1G RTE_DEPRECATED(ETH_SPEED_NUM_1G) RTE_ETH_SPEED_NUM_1G
+#define ETH_SPEED_NUM_2_5G RTE_DEPRECATED(ETH_SPEED_NUM_2_5G) RTE_ETH_SPEED_NUM_2_5G
+#define ETH_SPEED_NUM_5G RTE_DEPRECATED(ETH_SPEED_NUM_5G) RTE_ETH_SPEED_NUM_5G
+#define ETH_SPEED_NUM_10G RTE_DEPRECATED(ETH_SPEED_NUM_10G) RTE_ETH_SPEED_NUM_10G
+#define ETH_SPEED_NUM_20G RTE_DEPRECATED(ETH_SPEED_NUM_20G) RTE_ETH_SPEED_NUM_20G
+#define ETH_SPEED_NUM_25G RTE_DEPRECATED(ETH_SPEED_NUM_25G) RTE_ETH_SPEED_NUM_25G
+#define ETH_SPEED_NUM_40G RTE_DEPRECATED(ETH_SPEED_NUM_40G) RTE_ETH_SPEED_NUM_40G
+#define ETH_SPEED_NUM_50G RTE_DEPRECATED(ETH_SPEED_NUM_50G) RTE_ETH_SPEED_NUM_50G
+#define ETH_SPEED_NUM_56G RTE_DEPRECATED(ETH_SPEED_NUM_56G) RTE_ETH_SPEED_NUM_56G
+#define ETH_SPEED_NUM_100G RTE_DEPRECATED(ETH_SPEED_NUM_100G) RTE_ETH_SPEED_NUM_100G
+#define ETH_SPEED_NUM_200G RTE_DEPRECATED(ETH_SPEED_NUM_200G) RTE_ETH_SPEED_NUM_200G
+#define ETH_SPEED_NUM_UNKNOWN RTE_DEPRECATED(ETH_SPEED_NUM_UNKNOWN) RTE_ETH_SPEED_NUM_UNKNOWN
+
/**
* A structure used to retrieve link-level information of an Ethernet port.
*/
@@ -373,20 +375,21 @@ struct rte_eth_link {
* Constants used in link management.
*/
#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_UP RTE_ETH_LINK_UP
#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
-#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/
+#define ETH_LINK_HALF_DUPLEX RTE_DEPRECATED(ETH_LINK_HALF_DUPLEX) RTE_ETH_LINK_HALF_DUPLEX
+#define ETH_LINK_FULL_DUPLEX RTE_DEPRECATED(ETH_LINK_FULL_DUPLEX) RTE_ETH_LINK_FULL_DUPLEX
+#define ETH_LINK_DOWN RTE_DEPRECATED(ETH_LINK_DOWN) RTE_ETH_LINK_DOWN
+#define ETH_LINK_UP RTE_DEPRECATED(ETH_LINK_UP) RTE_ETH_LINK_UP
+#define ETH_LINK_FIXED RTE_DEPRECATED(ETH_LINK_FIXED) RTE_ETH_LINK_FIXED
+#define ETH_LINK_AUTONEG RTE_DEPRECATED(ETH_LINK_AUTONEG) RTE_ETH_LINK_AUTONEG
+
/**
* A structure used to configure the ring threshold registers of an Rx/Tx
* queue for an Ethernet port.
@@ -401,13 +404,14 @@ struct rte_eth_thresh {
* @see rte_eth_conf.rxmode.mq_mode.
*/
#define RTE_ETH_MQ_RX_RSS_FLAG RTE_BIT32(0) /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
#define RTE_ETH_MQ_RX_DCB_FLAG RTE_BIT32(1) /**< Enable DCB. */
-#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
#define RTE_ETH_MQ_RX_VMDQ_FLAG RTE_BIT32(2) /**< Enable VMDq. */
-#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**@}*/
+#define ETH_MQ_RX_RSS_FLAG RTE_DEPRECATED(ETH_MQ_RX_RSS_FLAG) RTE_ETH_MQ_RX_RSS_FLAG
+#define ETH_MQ_RX_DCB_FLAG RTE_DEPRECATED(ETH_MQ_RX_DCB_FLAG) RTE_ETH_MQ_RX_DCB_FLAG
+#define ETH_MQ_RX_VMDQ_FLAG RTE_DEPRECATED(ETH_MQ_RX_VMDQ_FLAG) RTE_ETH_MQ_RX_VMDQ_FLAG
+
/**
* A set of values to identify what method is to be used to route
* packets to multiple queues.
@@ -434,14 +438,14 @@ enum rte_eth_rx_mq_mode {
RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
-#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
-#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
-#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
-#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
-#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
-#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
-#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
+#define ETH_MQ_RX_NONE RTE_DEPRECATED(ETH_MQ_RX_NONE) RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_DEPRECATED(ETH_MQ_RX_RSS) RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_DEPRECATED(ETH_MQ_RX_DCB) RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_DCB_RSS) RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_RX_VMDQ_ONLY) RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_RSS) RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB) RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB_RSS) RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
@@ -453,10 +457,11 @@ enum rte_eth_tx_mq_mode {
RTE_ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */
RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
-#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
-#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
-#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
+
+#define ETH_MQ_TX_NONE RTE_DEPRECATED(ETH_MQ_TX_NONE) RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_DEPRECATED(ETH_MQ_TX_DCB) RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_TX_VMDQ_DCB) RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_TX_VMDQ_ONLY) RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the Rx features of an Ethernet port.
@@ -490,10 +495,10 @@ enum rte_vlan_type {
RTE_ETH_VLAN_TYPE_MAX,
};
-#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
-#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
-#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
-#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+#define ETH_VLAN_TYPE_UNKNOWN RTE_DEPRECATED(ETH_VLAN_TYPE_UNKNOWN) RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_DEPRECATED(ETH_VLAN_TYPE_INNER) RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_DEPRECATED(ETH_VLAN_TYPE_OUTER) RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_DEPRECATED(ETH_VLAN_TYPE_MAX) RTE_ETH_VLAN_TYPE_MAX
/**
* A structure used to describe a VLAN filter.
@@ -566,69 +571,70 @@ struct rte_eth_rss_conf {
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
#define RTE_ETH_RSS_IPV4 RTE_BIT64(2)
-#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
#define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
-#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
#define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
#define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
#define RTE_ETH_RSS_IPV6 RTE_BIT64(8)
-#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
#define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
-#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
#define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
#define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
#define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
-#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
#define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15)
-#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
#define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
-#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
#define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
-#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
#define RTE_ETH_RSS_PORT RTE_BIT64(18)
-#define ETH_RSS_PORT RTE_ETH_RSS_PORT
#define RTE_ETH_RSS_VXLAN RTE_BIT64(19)
-#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
#define RTE_ETH_RSS_GENEVE RTE_BIT64(20)
-#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
#define RTE_ETH_RSS_NVGRE RTE_BIT64(21)
-#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
#define RTE_ETH_RSS_GTPU RTE_BIT64(23)
-#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
#define RTE_ETH_RSS_ETH RTE_BIT64(24)
-#define ETH_RSS_ETH RTE_ETH_RSS_ETH
#define RTE_ETH_RSS_S_VLAN RTE_BIT64(25)
-#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
#define RTE_ETH_RSS_C_VLAN RTE_BIT64(26)
-#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
#define RTE_ETH_RSS_ESP RTE_BIT64(27)
-#define ETH_RSS_ESP RTE_ETH_RSS_ESP
#define RTE_ETH_RSS_AH RTE_BIT64(28)
-#define ETH_RSS_AH RTE_ETH_RSS_AH
#define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29)
-#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
#define RTE_ETH_RSS_PFCP RTE_BIT64(30)
-#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
#define RTE_ETH_RSS_PPPOE RTE_BIT64(31)
-#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
#define RTE_ETH_RSS_ECPRI RTE_BIT64(32)
-#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
#define RTE_ETH_RSS_MPLS RTE_BIT64(33)
-#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
#define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
-#define ETH_RSS_IPV4_CHKSUM RTE_ETH_RSS_IPV4_CHKSUM
+
+#define ETH_RSS_IPV4 RTE_DEPRECATED(ETH_RSS_IPV4) RTE_ETH_RSS_IPV4
+#define ETH_RSS_FRAG_IPV4 RTE_DEPRECATED(ETH_RSS_FRAG_IPV4) RTE_ETH_RSS_FRAG_IPV4
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_TCP) RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_UDP) RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_SCTP) RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_OTHER) RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define ETH_RSS_IPV6 RTE_DEPRECATED(ETH_RSS_IPV6) RTE_ETH_RSS_IPV6
+#define ETH_RSS_FRAG_IPV6 RTE_DEPRECATED(ETH_RSS_FRAG_IPV6) RTE_ETH_RSS_FRAG_IPV6
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_TCP) RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_UDP) RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_SCTP) RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_OTHER) RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define ETH_RSS_L2_PAYLOAD RTE_DEPRECATED(ETH_RSS_L2_PAYLOAD) RTE_ETH_RSS_L2_PAYLOAD
+#define ETH_RSS_IPV6_EX RTE_DEPRECATED(ETH_RSS_IPV6_EX) RTE_ETH_RSS_IPV6_EX
+#define ETH_RSS_IPV6_TCP_EX RTE_DEPRECATED(ETH_RSS_IPV6_TCP_EX) RTE_ETH_RSS_IPV6_TCP_EX
+#define ETH_RSS_IPV6_UDP_EX RTE_DEPRECATED(ETH_RSS_IPV6_UDP_EX) RTE_ETH_RSS_IPV6_UDP_EX
+#define ETH_RSS_PORT RTE_DEPRECATED(ETH_RSS_PORT) RTE_ETH_RSS_PORT
+#define ETH_RSS_VXLAN RTE_DEPRECATED(ETH_RSS_VXLAN) RTE_ETH_RSS_VXLAN
+#define ETH_RSS_GENEVE RTE_DEPRECATED(ETH_RSS_GENEVE) RTE_ETH_RSS_GENEVE
+#define ETH_RSS_NVGRE RTE_DEPRECATED(ETH_RSS_NVGRE) RTE_ETH_RSS_NVGRE
+#define ETH_RSS_GTPU RTE_DEPRECATED(ETH_RSS_GTPU) RTE_ETH_RSS_GTPU
+#define ETH_RSS_ETH RTE_DEPRECATED(ETH_RSS_ETH) RTE_ETH_RSS_ETH
+#define ETH_RSS_S_VLAN RTE_DEPRECATED(ETH_RSS_S_VLAN) RTE_ETH_RSS_S_VLAN
+#define ETH_RSS_C_VLAN RTE_DEPRECATED(ETH_RSS_C_VLAN) RTE_ETH_RSS_C_VLAN
+#define ETH_RSS_ESP RTE_DEPRECATED(ETH_RSS_ESP) RTE_ETH_RSS_ESP
+#define ETH_RSS_AH RTE_DEPRECATED(ETH_RSS_AH) RTE_ETH_RSS_AH
+#define ETH_RSS_L2TPV3 RTE_DEPRECATED(ETH_RSS_L2TPV3) RTE_ETH_RSS_L2TPV3
+#define ETH_RSS_PFCP RTE_DEPRECATED(ETH_RSS_PFCP) RTE_ETH_RSS_PFCP
+#define ETH_RSS_PPPOE RTE_DEPRECATED(ETH_RSS_PPPOE) RTE_ETH_RSS_PPPOE
+#define ETH_RSS_ECPRI RTE_DEPRECATED(ETH_RSS_ECPRI) RTE_ETH_RSS_ECPRI
+#define ETH_RSS_MPLS RTE_DEPRECATED(ETH_RSS_MPLS) RTE_ETH_RSS_MPLS
+#define ETH_RSS_IPV4_CHKSUM RTE_DEPRECATED(ETH_RSS_IPV4_CHKSUM) RTE_ETH_RSS_IPV4_CHKSUM
/**
* The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -643,7 +649,7 @@ struct rte_eth_rss_conf {
* it takes the reserved value 0 as input for the hash function.
*/
#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
-#define ETH_RSS_L4_CHKSUM RTE_ETH_RSS_L4_CHKSUM
+#define ETH_RSS_L4_CHKSUM RTE_DEPRECATED(ETH_RSS_L4_CHKSUM) RTE_ETH_RSS_L4_CHKSUM
/*
* We use the following macros to combine with above RTE_ETH_RSS_* for
@@ -655,21 +661,22 @@ struct rte_eth_rss_conf {
* them are added.
*/
#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
-#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
-#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
-#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
-#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
-#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
-#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
+
+#define ETH_RSS_L3_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L3_SRC_ONLY) RTE_ETH_RSS_L3_SRC_ONLY
+#define ETH_RSS_L3_DST_ONLY RTE_DEPRECATED(ETH_RSS_L3_DST_ONLY) RTE_ETH_RSS_L3_DST_ONLY
+#define ETH_RSS_L4_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L4_SRC_ONLY) RTE_ETH_RSS_L4_SRC_ONLY
+#define ETH_RSS_L4_DST_ONLY RTE_DEPRECATED(ETH_RSS_L4_DST_ONLY) RTE_ETH_RSS_L4_DST_ONLY
+#define ETH_RSS_L2_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L2_SRC_ONLY) RTE_ETH_RSS_L2_SRC_ONLY
+#define ETH_RSS_L2_DST_ONLY RTE_DEPRECATED(ETH_RSS_L2_DST_ONLY) RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
- * https:tools.ietf.org/html/rfc6052
+ * https://tools.ietf.org/html/rfc6052
* Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
* RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
@@ -694,26 +701,27 @@ struct rte_eth_rss_conf {
* can be performed on according to PMD and device capabilities.
*/
#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (UINT64_C(0) << 50)
-#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_DEPRECATED(ETH_RSS_LEVEL_PMD_DEFAULT) RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
#define RTE_ETH_RSS_LEVEL_OUTERMOST (UINT64_C(1) << 50)
-#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
+#define ETH_RSS_LEVEL_OUTERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_OUTERMOST) RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
#define RTE_ETH_RSS_LEVEL_INNERMOST (UINT64_C(2) << 50)
-#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
#define RTE_ETH_RSS_LEVEL_MASK (UINT64_C(3) << 50)
-#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
+
+#define ETH_RSS_LEVEL_INNERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_INNERMOST) RTE_ETH_RSS_LEVEL_INNERMOST
+#define ETH_RSS_LEVEL_MASK RTE_DEPRECATED(ETH_RSS_LEVEL_MASK) RTE_ETH_RSS_LEVEL_MASK
#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
-#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
+#define ETH_RSS_LEVEL(rss_hf) RTE_DEPRECATED(ETH_RSS_LEVEL(rss_hf)) RTE_ETH_RSS_LEVEL(rss_hf)
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -740,122 +748,122 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
#define RTE_ETH_RSS_IPV6_PRE32 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
+#define ETH_RSS_IPV6_PRE32 RTE_DEPRECATED(ETH_RSS_IPV6_PRE32) RTE_ETH_RSS_IPV6_PRE32
#define RTE_ETH_RSS_IPV6_PRE40 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
+#define ETH_RSS_IPV6_PRE40 RTE_DEPRECATED(ETH_RSS_IPV6_PRE40) RTE_ETH_RSS_IPV6_PRE40
#define RTE_ETH_RSS_IPV6_PRE48 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
+#define ETH_RSS_IPV6_PRE48 RTE_DEPRECATED(ETH_RSS_IPV6_PRE48) RTE_ETH_RSS_IPV6_PRE48
#define RTE_ETH_RSS_IPV6_PRE56 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
+#define ETH_RSS_IPV6_PRE56 RTE_DEPRECATED(ETH_RSS_IPV6_PRE56) RTE_ETH_RSS_IPV6_PRE56
#define RTE_ETH_RSS_IPV6_PRE64 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
+#define ETH_RSS_IPV6_PRE64 RTE_DEPRECATED(ETH_RSS_IPV6_PRE64) RTE_ETH_RSS_IPV6_PRE64
#define RTE_ETH_RSS_IPV6_PRE96 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
+#define ETH_RSS_IPV6_PRE96 RTE_DEPRECATED(ETH_RSS_IPV6_PRE96) RTE_ETH_RSS_IPV6_PRE96
#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
+#define ETH_RSS_IPV6_PRE32_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_UDP) RTE_ETH_RSS_IPV6_PRE32_UDP
#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
+#define ETH_RSS_IPV6_PRE40_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_UDP) RTE_ETH_RSS_IPV6_PRE40_UDP
#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
+#define ETH_RSS_IPV6_PRE48_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_UDP) RTE_ETH_RSS_IPV6_PRE48_UDP
#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
+#define ETH_RSS_IPV6_PRE56_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_UDP) RTE_ETH_RSS_IPV6_PRE56_UDP
#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
+#define ETH_RSS_IPV6_PRE64_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_UDP) RTE_ETH_RSS_IPV6_PRE64_UDP
#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
+#define ETH_RSS_IPV6_PRE96_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_UDP) RTE_ETH_RSS_IPV6_PRE96_UDP
#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
+#define ETH_RSS_IPV6_PRE32_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_TCP) RTE_ETH_RSS_IPV6_PRE32_TCP
#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
+#define ETH_RSS_IPV6_PRE40_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_TCP) RTE_ETH_RSS_IPV6_PRE40_TCP
#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
+#define ETH_RSS_IPV6_PRE48_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_TCP) RTE_ETH_RSS_IPV6_PRE48_TCP
#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
+#define ETH_RSS_IPV6_PRE56_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_TCP) RTE_ETH_RSS_IPV6_PRE56_TCP
#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
+#define ETH_RSS_IPV6_PRE64_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_TCP) RTE_ETH_RSS_IPV6_PRE64_TCP
#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
+#define ETH_RSS_IPV6_PRE96_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_TCP) RTE_ETH_RSS_IPV6_PRE96_TCP
#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_SCTP) RTE_ETH_RSS_IPV6_PRE32_SCTP
#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_SCTP) RTE_ETH_RSS_IPV6_PRE40_SCTP
#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_SCTP) RTE_ETH_RSS_IPV6_PRE48_SCTP
#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_SCTP) RTE_ETH_RSS_IPV6_PRE56_SCTP
#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_SCTP) RTE_ETH_RSS_IPV6_PRE64_SCTP
#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_SCTP) RTE_ETH_RSS_IPV6_PRE96_SCTP
#define RTE_ETH_RSS_IP ( \
RTE_ETH_RSS_IPV4 | \
@@ -865,35 +873,35 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
RTE_ETH_RSS_FRAG_IPV6 | \
RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
RTE_ETH_RSS_IPV6_EX)
-#define ETH_RSS_IP RTE_ETH_RSS_IP
+#define ETH_RSS_IP RTE_DEPRECATED(ETH_RSS_IP) RTE_ETH_RSS_IP
#define RTE_ETH_RSS_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_IPV6_UDP_EX)
-#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+#define ETH_RSS_UDP RTE_DEPRECATED(ETH_RSS_UDP) RTE_ETH_RSS_UDP
#define RTE_ETH_RSS_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_IPV6_TCP_EX)
-#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+#define ETH_RSS_TCP RTE_DEPRECATED(ETH_RSS_TCP) RTE_ETH_RSS_TCP
#define RTE_ETH_RSS_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+#define ETH_RSS_SCTP RTE_DEPRECATED(ETH_RSS_SCTP) RTE_ETH_RSS_SCTP
#define RTE_ETH_RSS_TUNNEL ( \
RTE_ETH_RSS_VXLAN | \
RTE_ETH_RSS_GENEVE | \
RTE_ETH_RSS_NVGRE)
-#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+#define ETH_RSS_TUNNEL RTE_DEPRECATED(ETH_RSS_TUNNEL) RTE_ETH_RSS_TUNNEL
#define RTE_ETH_RSS_VLAN ( \
RTE_ETH_RSS_S_VLAN | \
RTE_ETH_RSS_C_VLAN)
-#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
+#define ETH_RSS_VLAN RTE_DEPRECATED(ETH_RSS_VLAN) RTE_ETH_RSS_VLAN
/** Mask of valid RSS hash protocols */
#define RTE_ETH_RSS_PROTO_MASK ( \
@@ -918,7 +926,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
RTE_ETH_RSS_GENEVE | \
RTE_ETH_RSS_NVGRE | \
RTE_ETH_RSS_MPLS)
-#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
+#define ETH_RSS_PROTO_MASK RTE_DEPRECATED(ETH_RSS_PROTO_MASK) RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
@@ -926,84 +934,90 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
* documentation or the description of relevant functions for more details.
*/
#define RTE_ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
#define RTE_ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
#define RTE_ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
#define RTE_ETH_RSS_RETA_SIZE_512 512
-#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
#define RTE_ETH_RETA_GROUP_SIZE 64
-#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
+
+#define ETH_RSS_RETA_SIZE_64 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_64) RTE_ETH_RSS_RETA_SIZE_64
+#define ETH_RSS_RETA_SIZE_128 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_128) RTE_ETH_RSS_RETA_SIZE_128
+#define ETH_RSS_RETA_SIZE_256 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_256) RTE_ETH_RSS_RETA_SIZE_256
+#define ETH_RSS_RETA_SIZE_512 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_512) RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_RETA_GROUP_SIZE RTE_DEPRECATED(RTE_RETA_GROUP_SIZE) RTE_ETH_RETA_GROUP_SIZE
/**@{@name VMDq and DCB maximums */
#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
-#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/**@}*/
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_DEPRECATED(ETH_VMDQ_MAX_VLAN_FILTERS) RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_DEPRECATED(ETH_DCB_NUM_USER_PRIORITIES) RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_VMDQ_DCB_NUM_QUEUES) RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define ETH_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_DCB_NUM_QUEUES) RTE_ETH_DCB_NUM_QUEUES
+
/**@{@name DCB capabilities */
#define RTE_ETH_DCB_PG_SUPPORT RTE_BIT32(0) /**< Priority Group(ETS) support. */
-#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
#define RTE_ETH_DCB_PFC_SUPPORT RTE_BIT32(1) /**< Priority Flow Control support. */
-#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/**@}*/
+#define ETH_DCB_PG_SUPPORT RTE_DEPRECATED(ETH_DCB_PG_SUPPORT) RTE_ETH_DCB_PG_SUPPORT
+#define ETH_DCB_PFC_SUPPORT RTE_DEPRECATED(ETH_DCB_PFC_SUPPORT) RTE_ETH_DCB_PFC_SUPPORT
+
/**@{@name VLAN offload bits */
#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define ETH_VLAN_STRIP_OFFLOAD RTE_DEPRECATED(ETH_VLAN_STRIP_OFFLOAD) RTE_ETH_VLAN_STRIP_OFFLOAD
+#define ETH_VLAN_FILTER_OFFLOAD RTE_DEPRECATED(ETH_VLAN_FILTER_OFFLOAD) RTE_ETH_VLAN_FILTER_OFFLOAD
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_DEPRECATED(ETH_VLAN_EXTEND_OFFLOAD) RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define ETH_QINQ_STRIP_OFFLOAD RTE_DEPRECATED(ETH_QINQ_STRIP_OFFLOAD) RTE_ETH_QINQ_STRIP_OFFLOAD
#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
-#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/**@}*/
+#define ETH_VLAN_STRIP_MASK RTE_DEPRECATED(ETH_VLAN_STRIP_MASK) RTE_ETH_VLAN_STRIP_MASK
+#define ETH_VLAN_FILTER_MASK RTE_DEPRECATED(ETH_VLAN_FILTER_MASK) RTE_ETH_VLAN_FILTER_MASK
+#define ETH_VLAN_EXTEND_MASK RTE_DEPRECATED(ETH_VLAN_EXTEND_MASK) RTE_ETH_VLAN_EXTEND_MASK
+#define ETH_QINQ_STRIP_MASK RTE_DEPRECATED(ETH_QINQ_STRIP_MASK) RTE_ETH_QINQ_STRIP_MASK
+#define ETH_VLAN_ID_MAX RTE_DEPRECATED(ETH_VLAN_ID_MAX) RTE_ETH_VLAN_ID_MAX
+
/* Definitions used for receive MAC address */
#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
-#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_DEPRECATED(ETH_NUM_RECEIVE_MAC_ADDR) RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_DEPRECATED(ETH_VMDQ_NUM_UC_HASH_ARRAY) RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/**@{@name VMDq Rx mode
* @see rte_eth_vmdq_rx_conf.rx_mode
*/
/** Accept untagged packets. */
#define RTE_ETH_VMDQ_ACCEPT_UNTAG RTE_BIT32(0)
-#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
/** Accept packets in multicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_MC RTE_BIT32(1)
-#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
/** Accept packets in unicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_UC RTE_BIT32(2)
-#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
/** Accept broadcast packets. */
#define RTE_ETH_VMDQ_ACCEPT_BROADCAST RTE_BIT32(3)
-#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
/** Multicast promiscuous. */
#define RTE_ETH_VMDQ_ACCEPT_MULTICAST RTE_BIT32(4)
-#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/**@}*/
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_DEPRECATED(ETH_VMDQ_ACCEPT_UNTAG) RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_MC) RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_UC) RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_BROADCAST) RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_MULTICAST) RTE_ETH_VMDQ_ACCEPT_MULTICAST
+
/**
* A structure used to configure 64 entries of Redirection Table of the
* Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -1025,8 +1039,8 @@ enum rte_eth_nb_tcs {
RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
-#define ETH_4_TCS RTE_ETH_4_TCS
-#define ETH_8_TCS RTE_ETH_8_TCS
+#define ETH_4_TCS RTE_DEPRECATED(ETH_4_TCS) RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_DEPRECATED(ETH_8_TCS) RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
@@ -1038,10 +1052,10 @@ enum rte_eth_nb_pools {
RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
-#define ETH_8_POOLS RTE_ETH_8_POOLS
-#define ETH_16_POOLS RTE_ETH_16_POOLS
-#define ETH_32_POOLS RTE_ETH_32_POOLS
-#define ETH_64_POOLS RTE_ETH_64_POOLS
+#define ETH_8_POOLS RTE_DEPRECATED(ETH_8_POOLS) RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_DEPRECATED(ETH_16_POOLS) RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_DEPRECATED(ETH_32_POOLS) RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_DEPRECATED(ETH_64_POOLS) RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
@@ -1364,11 +1378,10 @@ enum rte_eth_fc_mode {
RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
-
-#define RTE_FC_NONE RTE_ETH_FC_NONE
-#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
-#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
-#define RTE_FC_FULL RTE_ETH_FC_FULL
+#define RTE_FC_NONE RTE_DEPRECATED(RTE_FC_NONE) RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_DEPRECATED(RTE_FC_RX_PAUSE) RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_DEPRECATED(RTE_FC_TX_PAUSE) RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_DEPRECATED(RTE_FC_FULL) RTE_ETH_FC_FULL
/**
* A structure used to configure Ethernet flow control parameter.
@@ -1411,17 +1424,16 @@ enum rte_eth_tunnel_type {
RTE_ETH_TUNNEL_TYPE_ECPRI,
RTE_ETH_TUNNEL_TYPE_MAX,
};
-
-#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
-#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
-#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
-#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
-#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
-#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
-#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
-#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
-#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
-#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+#define RTE_TUNNEL_TYPE_NONE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NONE) RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN) RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_DEPRECATED(RTE_TUNNEL_TYPE_GENEVE) RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_DEPRECATED(RTE_TUNNEL_TYPE_TEREDO) RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NVGRE) RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_IP_IN_GRE) RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_DEPRECATED(RTE_L2_TUNNEL_TYPE_E_TAG) RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN_GPE) RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_DEPRECATED(RTE_TUNNEL_TYPE_ECPRI) RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_DEPRECATED(RTE_TUNNEL_TYPE_MAX) RTE_ETH_TUNNEL_TYPE_MAX
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1437,9 +1449,9 @@ enum rte_eth_fdir_pballoc_type {
};
#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
-#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
-#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
-#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
+#define RTE_FDIR_PBALLOC_64K RTE_DEPRECATED(RTE_FDIR_PBALLOC_64K) RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_DEPRECATED(RTE_FDIR_PBALLOC_128K) RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_DEPRECATED(RTE_FDIR_PBALLOC_256K) RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in Rx descriptors.
@@ -1466,7 +1478,6 @@ struct rte_eth_fdir_conf {
/** Flex payload configuration. */
struct rte_eth_fdir_flex_conf flex_conf;
};
-
#define rte_fdir_conf rte_eth_fdir_conf
/**
@@ -1545,57 +1556,58 @@ struct rte_eth_conf {
* Rx offload capabilities of a device.
*/
#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP RTE_BIT64(0)
-#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1)
-#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM RTE_BIT64(2)
-#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM RTE_BIT64(3)
-#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
#define RTE_ETH_RX_OFFLOAD_TCP_LRO RTE_BIT64(4)
-#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP RTE_BIT64(5)
-#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(6)
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP RTE_BIT64(7)
-#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT RTE_BIT64(8)
-#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER RTE_BIT64(9)
-#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND RTE_BIT64(10)
-#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
#define RTE_ETH_RX_OFFLOAD_SCATTER RTE_BIT64(13)
-#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
#define RTE_ETH_RX_OFFLOAD_TIMESTAMP RTE_BIT64(14)
-#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
#define RTE_ETH_RX_OFFLOAD_SECURITY RTE_BIT64(15)
-#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
#define RTE_ETH_RX_OFFLOAD_KEEP_CRC RTE_BIT64(16)
-#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM RTE_BIT64(17)
-#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18)
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
#define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
-#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_STRIP) RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_CKSUM) RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_LRO) RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_QINQ_STRIP) RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_MACSEC_STRIP) RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_DEPRECATED(DEV_RX_OFFLOAD_HEADER_SPLIT) RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_FILTER) RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_EXTEND) RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define DEV_RX_OFFLOAD_SCATTER RTE_DEPRECATED(DEV_RX_OFFLOAD_SCATTER) RTE_ETH_RX_OFFLOAD_SCATTER
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_DEPRECATED(DEV_RX_OFFLOAD_TIMESTAMP) RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define DEV_RX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_RX_OFFLOAD_SECURITY) RTE_ETH_RX_OFFLOAD_SECURITY
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_DEPRECATED(DEV_RX_OFFLOAD_KEEP_CRC) RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_SCTP_CKSUM) RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_DEPRECATED(DEV_RX_OFFLOAD_RSS_HASH) RTE_ETH_RX_OFFLOAD_RSS_HASH
+
#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_CHECKSUM) RTE_ETH_RX_OFFLOAD_CHECKSUM
#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
-#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
+#define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1606,80 +1618,81 @@ struct rte_eth_conf {
* Tx offload capabilities of a device.
*/
#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT RTE_BIT64(0)
-#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1)
-#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM RTE_BIT64(2)
-#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM RTE_BIT64(3)
-#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM RTE_BIT64(4)
-#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
#define RTE_ETH_TX_OFFLOAD_TCP_TSO RTE_BIT64(5)
-#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
#define RTE_ETH_TX_OFFLOAD_UDP_TSO RTE_BIT64(6)
-#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(7) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT RTE_BIT64(8)
-#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO RTE_BIT64(9) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO RTE_BIT64(10) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO RTE_BIT64(11) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO RTE_BIT64(12) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT RTE_BIT64(13)
-#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
/**
* Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* Tx queue without SW lock.
*/
#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE RTE_BIT64(14)
-#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/** Device supports multi segment send. */
#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS RTE_BIT64(15)
-#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**
* Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE RTE_BIT64(16)
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
#define RTE_ETH_TX_OFFLOAD_SECURITY RTE_BIT64(17)
-#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO RTE_BIT64(18)
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO RTE_BIT64(19)
-#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(20)
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_BIT64(21)
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
*/
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_VLAN_INSERT) RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_CKSUM) RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_SCTP_CKSUM) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_TSO) RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TSO) RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_QINQ_INSERT) RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_VXLAN_TNL_TSO) RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GRE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IPIP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GENEVE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_MACSEC_INSERT) RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MT_LOCKFREE) RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_DEPRECATED(DEV_TX_OFFLOAD_MULTI_SEGS) RTE_ETH_TX_OFFLOAD_MULTI_SEGS
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MBUF_FAST_FREE) RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define DEV_TX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_TX_OFFLOAD_SECURITY) RTE_ETH_TX_OFFLOAD_SECURITY
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TNL_TSO) RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_DEPRECATED(DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
+
/**@{@name Device capabilities
* Non-offload capabilities reported in rte_eth_dev_info.dev_capa.
*/
@@ -1931,9 +1944,10 @@ struct rte_eth_xstat_name {
};
#define RTE_ETH_DCB_NUM_TCS 8
-#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
#define RTE_ETH_MAX_VMDQ_POOL 64
-#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
+
+#define ETH_DCB_NUM_TCS RTE_DEPRECATED(ETH_DCB_NUM_TCS) RTE_ETH_DCB_NUM_TCS
+#define ETH_MAX_VMDQ_POOL RTE_DEPRECATED(ETH_MAX_VMDQ_POOL) RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
--
2.34.1
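The aliasing pattern used in the hunks above keeps the old `DEV_TX_OFFLOAD_*` names compiling while flagging them at build time. A stand-alone sketch of the same technique (the `GEN_*` helpers are illustrative stand-ins, not the real `RTE_DEPRECATED` definition in rte_common.h):

```c
#include <assert.h>
#include <stdint.h>

/* New canonical name, as introduced by the patch. */
#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT (UINT64_C(1) << 0)

/* Expanding the old name first emits a compile-time deprecation
 * warning via _Pragma, then yields the new name's value. */
#define GEN_PRAGMA(x) _Pragma(#x)
#define GEN_WARN(w) GEN_PRAGMA(GCC warning #w)
#define GEN_DEPRECATED(sym) GEN_WARN(sym is deprecated)
#define DEV_TX_OFFLOAD_VLAN_INSERT \
	GEN_DEPRECATED(DEV_TX_OFFLOAD_VLAN_INSERT) RTE_ETH_TX_OFFLOAD_VLAN_INSERT

static uint64_t old_name_value(void)
{
	/* warns "DEV_TX_OFFLOAD_VLAN_INSERT is deprecated" at compile time */
	uint64_t v = DEV_TX_OFFLOAD_VLAN_INSERT;
	return v;
}
```

So existing applications keep building against the old names, but every use shows up in the build log until the code is migrated.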
^ permalink raw reply [relevance 1%]
* RE: [PATCH 00/12] add packet generator library and example app
@ 2022-01-12 16:18 3% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-12 16:18 UTC (permalink / raw)
To: Bruce Richardson, Ronan Randles; +Cc: dev, harry.van.haaren
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Tuesday, 14 December 2021 15.58
>
> On Tue, Dec 14, 2021 at 02:12:30PM +0000, Ronan Randles wrote:
> > This patchset introduces a Gen library for DPDK. This library provides
> > an easy way to generate traffic in order to test software based
> > network components.
> >
> > This library enables the basic functionality required in the traffic
> > generator. This includes: raw data setting, packet Tx and Rx, creation
> > and destruction of a Gen instance and various types of data parsing.
> > This functionality is implemented in "lib/gen/rte_gen.c". IPv4 parsing
> > functionality is also added in "lib/net/rte_ip.c", this is then used
> > in the gen library.
> >
> > A sample app is included in "examples/generator" which shows the use
> > of the gen library in making a traffic generator. This can be used to
> > generate traffic by running the dpdk-generator generator executable.
> > This sample app supports runtime stats reporting (/gen/stats) and line
> > rate limiting (/gen/mpps,<target traffic rate in mpps>) through
> > telemetry.py.
> >
> > As more features are added to the gen library, the sample application
> > will become more powerful through the "/gen/packet" string parameter
> > (currently supports IP and Ether address setting). This will allow
> > every application to generate more complex traffic types in the future
> > without changing API.
> >
>
> I think this is great to see, and sounds a good addition to DPDK. One
> thing to address in any v2 is to add more documentation for both the
> library and the example app. You need a chapter on the lib added to the
> programmers guide to help others use the library from their code, and a
> chapter on the generator example in the example apps guide.
>
> More general question - if we do have a traffic generator in DPDK,
> would it be better in the "app" rather than the examples one? If it's
> only going to ever stay a simple example of using the lib, examples
> might be fine, but I suspect that it will get quite complicated if
> people start using it and adding more features, in which case a move to
> the "app" folder might be more appropriate. Thoughts?
>
> /Bruce
If a traffic generator lib/app is added to DPDK itself, it should be able to evolve freely, unencumbered by the DPDK ABI/API stability requirements.
Also, it MUST be optional when building DPDK for production purposes. Consider the security perspective: If a network appliance based on DPDK is compromised by a hacker, you don't want it to include a traffic generator.
-Morten
^ permalink raw reply [relevance 3%]
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
@ 2022-01-14 6:30 3% ` Xia, Chenbo
2022-01-17 5:39 0% ` Hu, Jiayu
0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2022-01-14 6:30 UTC (permalink / raw)
To: Hu, Jiayu, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
Hi Jiayu,
This is the first round of review; I'll spend time on the OVS patches later and come back to this.
> -----Original Message-----
> From: Hu, Jiayu <jiayu.hu@intel.com>
> Sent: Friday, December 31, 2021 5:55 AM
> To: dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; i.maximets@ovn.org; Xia, Chenbo
> <chenbo.xia@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Van
> Haaren, Harry <harry.van.haaren@intel.com>; Pai G, Sunil
> <sunil.pai.g@intel.com>; Mcnamara, John <john.mcnamara@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> liangma@liangbit.com; Hu, Jiayu <jiayu.hu@intel.com>
> Subject: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
>
> Since dmadev is introduced in 21.11, to avoid the overhead of vhost DMA
> abstraction layer and simplify application logics, this patch integrates
> dmadev in asynchronous data path.
>
> Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> Signed-off-by: Sunil Pai G <sunil.pai.g@intel.com>
> ---
> doc/guides/prog_guide/vhost_lib.rst | 70 ++++-----
> examples/vhost/Makefile | 2 +-
> examples/vhost/ioat.c | 218 --------------------------
> examples/vhost/ioat.h | 63 --------
> examples/vhost/main.c | 230 +++++++++++++++++++++++-----
> examples/vhost/main.h | 11 ++
> examples/vhost/meson.build | 6 +-
> lib/vhost/meson.build | 3 +-
> lib/vhost/rte_vhost_async.h | 121 +++++----------
> lib/vhost/version.map | 3 +
> lib/vhost/vhost.c | 130 +++++++++++-----
> lib/vhost/vhost.h | 53 ++++++-
> lib/vhost/virtio_net.c | 206 +++++++++++++++++++------
> 13 files changed, 587 insertions(+), 529 deletions(-)
> delete mode 100644 examples/vhost/ioat.c
> delete mode 100644 examples/vhost/ioat.h
>
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index 76f5d303c9..bdce7cbf02 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -218,38 +218,12 @@ The following is an overview of some key Vhost API
> functions:
>
> Enable or disable zero copy feature of the vhost crypto backend.
>
> -* ``rte_vhost_async_channel_register(vid, queue_id, config, ops)``
> +* ``rte_vhost_async_channel_register(vid, queue_id)``
>
> Register an async copy device channel for a vhost queue after vring
Since dmadev is here, let's just use 'DMA device' instead of 'copy device'
> - is enabled. Following device ``config`` must be specified together
> - with the registration:
> + is enabled.
>
> - * ``features``
> -
> - This field is used to specify async copy device features.
> -
> - ``RTE_VHOST_ASYNC_INORDER`` represents the async copy device can
> - guarantee the order of copy completion is the same as the order
> - of copy submission.
> -
> - Currently, only ``RTE_VHOST_ASYNC_INORDER`` capable device is
> - supported by vhost.
> -
> - Applications must provide following ``ops`` callbacks for vhost lib to
> - work with the async copy devices:
> -
> - * ``transfer_data(vid, queue_id, descs, opaque_data, count)``
> -
> - vhost invokes this function to submit copy data to the async devices.
> - For non-async_inorder capable devices, ``opaque_data`` could be used
> - for identifying the completed packets.
> -
> - * ``check_completed_copies(vid, queue_id, opaque_data, max_packets)``
> -
> - vhost invokes this function to get the copy data completed by async
> - devices.
> -
> -* ``rte_vhost_async_channel_register_thread_unsafe(vid, queue_id, config,
> ops)``
> +* ``rte_vhost_async_channel_register_thread_unsafe(vid, queue_id)``
>
> Register an async copy device channel for a vhost queue without
> performing any locking.
> @@ -277,18 +251,13 @@ The following is an overview of some key Vhost API
> functions:
> This function is only safe to call in vhost callback functions
> (i.e., struct rte_vhost_device_ops).
>
> -* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count, comp_pkts,
> comp_count)``
> +* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count, dma_id,
> dma_vchan)``
>
> Submit an enqueue request to transmit ``count`` packets from host to guest
> - by async data path. Successfully enqueued packets can be transfer completed
> - or being occupied by DMA engines; transfer completed packets are returned
> in
> - ``comp_pkts``, but others are not guaranteed to finish, when this API
> - call returns.
> + by async data path. Applications must not free the packets submitted for
> + enqueue until the packets are completed.
>
> - Applications must not free the packets submitted for enqueue until the
> - packets are completed.
> -
> -* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count)``
> +* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count, dma_id,
> dma_vchan)``
>
> Poll enqueue completion status from async data path. Completed packets
> are returned to applications through ``pkts``.
> @@ -298,7 +267,7 @@ The following is an overview of some key Vhost API
> functions:
> This function returns the amount of in-flight packets for the vhost
> queue using async acceleration.
>
> -* ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count)``
> +* ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id,
> dma_vchan)``
>
> Clear inflight packets which are submitted to DMA engine in vhost async
> data
> path. Completed packets are returned to applications through ``pkts``.
> @@ -442,3 +411,26 @@ Finally, a set of device ops is defined for device
> specific operations:
> * ``get_notify_area``
>
> Called to get the notify area info of the queue.
> +
> +Vhost asynchronous data path
> +----------------------------
> +
> +Vhost asynchronous data path leverages DMA devices to offload memory
> +copies from the CPU and it is implemented in an asynchronous way. It
> +enables applications, like OVS, to save CPU cycles and hide memory copy
> +overhead, thus achieving higher throughput.
> +
> +Vhost doesn't manage DMA devices; applications, like OVS, need to
> +manage and configure them. Applications need to tell vhost which
> +DMA device to use in every data path function call. This design gives
> +applications the flexibility to dynamically use DMA channels in
> +different function modules, not limited to vhost.
> +
> +In addition, vhost supports M:N mapping between vrings and DMA virtual
> +channels. Specifically, one vring can use multiple different DMA channels
> +and one DMA channel can be shared by multiple vrings at the same time.
> +The reason for enabling one vring to use multiple DMA channels is that
> +it's possible for more than one dataplane thread to enqueue packets to
> +the same vring, each with its own DMA virtual channel. Besides, the number
> +of DMA devices is limited. For the purpose of scaling, it's necessary to
> +support sharing DMA channels among vrings.
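The M:N model described in this doc text can be made concrete with a small sketch: each dataplane worker picks its own DMA vchan and passes it on every enqueue, so a vring fed by several workers is served by several channels, while one channel serves all vrings of its workers (illustrative code, not the vhost API):

```c
#include <assert.h>

#define NB_DMA_VCHANS 2	/* DMA virtual channels available to the app */

/* Round-robin assignment of dataplane workers to DMA vchans. A vring
 * enqueued to by workers 0 and 1 is served by both channels; channel 0
 * is shared by every vring that workers 0 and 2 touch. */
static int worker_dma_vchan(unsigned int worker_id)
{
	return (int)(worker_id % NB_DMA_VCHANS);
}
```

In the real datapath, the chosen id would be the dma_id/dma_vchan arguments passed to rte_vhost_submit_enqueue_burst() and rte_vhost_poll_enqueue_completed().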
> diff --git a/examples/vhost/Makefile b/examples/vhost/Makefile
> index 587ea2ab47..975a5dfe40 100644
> --- a/examples/vhost/Makefile
> +++ b/examples/vhost/Makefile
> @@ -5,7 +5,7 @@
> APP = vhost-switch
>
> # all source are stored in SRCS-y
> -SRCS-y := main.c virtio_net.c ioat.c
> +SRCS-y := main.c virtio_net.c
>
> PKGCONF ?= pkg-config
>
> diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
> deleted file mode 100644
> index 9aeeb12fd9..0000000000
> --- a/examples/vhost/ioat.c
> +++ /dev/null
> @@ -1,218 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2010-2020 Intel Corporation
> - */
> -
> -#include <sys/uio.h>
> -#ifdef RTE_RAW_IOAT
> -#include <rte_rawdev.h>
> -#include <rte_ioat_rawdev.h>
> -
> -#include "ioat.h"
> -#include "main.h"
> -
> -struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> -
> -struct packet_tracker {
> - unsigned short size_track[MAX_ENQUEUED_SIZE];
> - unsigned short next_read;
> - unsigned short next_write;
> - unsigned short last_remain;
> - unsigned short ioat_space;
> -};
> -
> -struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
> -
> -int
> -open_ioat(const char *value)
> -{
> - struct dma_for_vhost *dma_info = dma_bind;
> - char *input = strndup(value, strlen(value) + 1);
> - char *addrs = input;
> - char *ptrs[2];
> - char *start, *end, *substr;
> - int64_t vid, vring_id;
> - struct rte_ioat_rawdev_config config;
> - struct rte_rawdev_info info = { .dev_private = &config };
> - char name[32];
> - int dev_id;
> - int ret = 0;
> - uint16_t i = 0;
> - char *dma_arg[MAX_VHOST_DEVICE];
> - int args_nr;
> -
> - while (isblank(*addrs))
> - addrs++;
> - if (*addrs == '\0') {
> - ret = -1;
> - goto out;
> - }
> -
> - /* process DMA devices within bracket. */
> - addrs++;
> - substr = strtok(addrs, ";]");
> - if (!substr) {
> - ret = -1;
> - goto out;
> - }
> - args_nr = rte_strsplit(substr, strlen(substr),
> - dma_arg, MAX_VHOST_DEVICE, ',');
> - if (args_nr <= 0) {
> - ret = -1;
> - goto out;
> - }
> - while (i < args_nr) {
> - char *arg_temp = dma_arg[i];
> - uint8_t sub_nr;
> - sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> - if (sub_nr != 2) {
> - ret = -1;
> - goto out;
> - }
> -
> - start = strstr(ptrs[0], "txd");
> - if (start == NULL) {
> - ret = -1;
> - goto out;
> - }
> -
> - start += 3;
> - vid = strtol(start, &end, 0);
> - if (end == start) {
> - ret = -1;
> - goto out;
> - }
> -
> - vring_id = 0 + VIRTIO_RXQ;
> - if (rte_pci_addr_parse(ptrs[1],
> - &(dma_info + vid)->dmas[vring_id].addr) < 0) {
> - ret = -1;
> - goto out;
> - }
> -
> - rte_pci_device_name(&(dma_info + vid)->dmas[vring_id].addr,
> - name, sizeof(name));
> - dev_id = rte_rawdev_get_dev_id(name);
> - if (dev_id == (uint16_t)(-ENODEV) ||
> - dev_id == (uint16_t)(-EINVAL)) {
> - ret = -1;
> - goto out;
> - }
> -
> - if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
> - strstr(info.driver_name, "ioat") == NULL) {
> - ret = -1;
> - goto out;
> - }
> -
> - (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> - (dma_info + vid)->dmas[vring_id].is_valid = true;
> - config.ring_size = IOAT_RING_SIZE;
> - config.hdls_disable = true;
> - if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0) {
> - ret = -1;
> - goto out;
> - }
> - rte_rawdev_start(dev_id);
> - cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE - 1;
> - dma_info->nr++;
> - i++;
> - }
> -out:
> - free(input);
> - return ret;
> -}
> -
> -int32_t
> -ioat_transfer_data_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data, uint16_t count)
> -{
> - uint32_t i_iter;
> - uint16_t dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;
> - struct rte_vhost_iov_iter *iter = NULL;
> - unsigned long i_seg;
> - unsigned short mask = MAX_ENQUEUED_SIZE - 1;
> - unsigned short write = cb_tracker[dev_id].next_write;
> -
> - if (!opaque_data) {
> - for (i_iter = 0; i_iter < count; i_iter++) {
> - iter = iov_iter + i_iter;
> - i_seg = 0;
> - if (cb_tracker[dev_id].ioat_space < iter->nr_segs)
> - break;
> - while (i_seg < iter->nr_segs) {
> - rte_ioat_enqueue_copy(dev_id,
> - (uintptr_t)(iter->iov[i_seg].src_addr),
> - (uintptr_t)(iter->iov[i_seg].dst_addr),
> - iter->iov[i_seg].len,
> - 0,
> - 0);
> - i_seg++;
> - }
> - write &= mask;
> - cb_tracker[dev_id].size_track[write] = iter->nr_segs;
> - cb_tracker[dev_id].ioat_space -= iter->nr_segs;
> - write++;
> - }
> - } else {
> - /* Opaque data is not supported */
> - return -1;
> - }
> - /* ring the doorbell */
> - rte_ioat_perform_ops(dev_id);
> - cb_tracker[dev_id].next_write = write;
> - return i_iter;
> -}
> -
> -int32_t
> -ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets)
> -{
> - if (!opaque_data) {
> - uintptr_t dump[255];
> - int n_seg;
> - unsigned short read, write;
> - unsigned short nb_packet = 0;
> - unsigned short mask = MAX_ENQUEUED_SIZE - 1;
> - unsigned short i;
> -
> - uint16_t dev_id = dma_bind[vid].dmas[queue_id * 2
> - + VIRTIO_RXQ].dev_id;
> - n_seg = rte_ioat_completed_ops(dev_id, 255, NULL, NULL, dump,
> dump);
> - if (n_seg < 0) {
> - RTE_LOG(ERR,
> - VHOST_DATA,
> - "fail to poll completed buf on IOAT device %u",
> - dev_id);
> - return 0;
> - }
> - if (n_seg == 0)
> - return 0;
> -
> - cb_tracker[dev_id].ioat_space += n_seg;
> - n_seg += cb_tracker[dev_id].last_remain;
> -
> - read = cb_tracker[dev_id].next_read;
> - write = cb_tracker[dev_id].next_write;
> - for (i = 0; i < max_packets; i++) {
> - read &= mask;
> - if (read == write)
> - break;
> - if (n_seg >= cb_tracker[dev_id].size_track[read]) {
> - n_seg -= cb_tracker[dev_id].size_track[read];
> - read++;
> - nb_packet++;
> - } else {
> - break;
> - }
> - }
> - cb_tracker[dev_id].next_read = read;
> - cb_tracker[dev_id].last_remain = n_seg;
> - return nb_packet;
> - }
> - /* Opaque data is not supported */
> - return -1;
> -}
> -
> -#endif /* RTE_RAW_IOAT */
> diff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h
> deleted file mode 100644
> index d9bf717e8d..0000000000
> --- a/examples/vhost/ioat.h
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2010-2020 Intel Corporation
> - */
> -
> -#ifndef _IOAT_H_
> -#define _IOAT_H_
> -
> -#include <rte_vhost.h>
> -#include <rte_pci.h>
> -#include <rte_vhost_async.h>
> -
> -#define MAX_VHOST_DEVICE 1024
> -#define IOAT_RING_SIZE 4096
> -#define MAX_ENQUEUED_SIZE 4096
> -
> -struct dma_info {
> - struct rte_pci_addr addr;
> - uint16_t dev_id;
> - bool is_valid;
> -};
> -
> -struct dma_for_vhost {
> - struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> - uint16_t nr;
> -};
> -
> -#ifdef RTE_RAW_IOAT
> -int open_ioat(const char *value);
> -
> -int32_t
> -ioat_transfer_data_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data, uint16_t count);
> -
> -int32_t
> -ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets);
> -#else
> -static int open_ioat(const char *value __rte_unused)
> -{
> - return -1;
> -}
> -
> -static int32_t
> -ioat_transfer_data_cb(int vid __rte_unused, uint16_t queue_id __rte_unused,
> - struct rte_vhost_iov_iter *iov_iter __rte_unused,
> - struct rte_vhost_async_status *opaque_data __rte_unused,
> - uint16_t count __rte_unused)
> -{
> - return -1;
> -}
> -
> -static int32_t
> -ioat_check_completed_copies_cb(int vid __rte_unused,
> - uint16_t queue_id __rte_unused,
> - struct rte_vhost_async_status *opaque_data __rte_unused,
> - uint16_t max_packets __rte_unused)
> -{
> - return -1;
> -}
> -#endif
> -#endif /* _IOAT_H_ */
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 33d023aa39..44073499bc 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -24,8 +24,9 @@
> #include <rte_ip.h>
> #include <rte_tcp.h>
> #include <rte_pause.h>
> +#include <rte_dmadev.h>
> +#include <rte_vhost_async.h>
>
> -#include "ioat.h"
> #include "main.h"
>
> #ifndef MAX_QUEUES
> @@ -56,6 +57,14 @@
> #define RTE_TEST_TX_DESC_DEFAULT 512
>
> #define INVALID_PORT_ID 0xFF
> +#define INVALID_DMA_ID -1
> +
> +#define MAX_VHOST_DEVICE 1024
> +#define DMA_RING_SIZE 4096
> +
> +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> +struct rte_vhost_async_dma_info dma_config[RTE_DMADEV_DEFAULT_MAX];
> +static int dma_count;
>
> /* mask of enabled ports */
> static uint32_t enabled_port_mask = 0;
> @@ -96,8 +105,6 @@ static int builtin_net_driver;
>
> static int async_vhost_driver;
>
> -static char *dma_type;
> -
> /* Specify timeout (in useconds) between retries on RX. */
> static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> /* Specify the number of retries on RX. */
> @@ -196,13 +203,134 @@ struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE *
> MAX_VHOST_DEVICE];
> #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \
> / US_PER_S * BURST_TX_DRAIN_US)
>
> +static inline bool
> +is_dma_configured(int16_t dev_id)
> +{
> + int i;
> +
> + for (i = 0; i < dma_count; i++) {
> + if (dma_config[i].dev_id == dev_id) {
> + return true;
> + }
> + }
> + return false;
> +}
> +
> static inline int
> open_dma(const char *value)
> {
> - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> - return open_ioat(value);
> + struct dma_for_vhost *dma_info = dma_bind;
> + char *input = strndup(value, strlen(value) + 1);
> + char *addrs = input;
> + char *ptrs[2];
> + char *start, *end, *substr;
> + int64_t vid, vring_id;
> +
> + struct rte_dma_info info;
> + struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> + struct rte_dma_vchan_conf qconf = {
> + .direction = RTE_DMA_DIR_MEM_TO_MEM,
> + .nb_desc = DMA_RING_SIZE
> + };
> +
> + int dev_id;
> + int ret = 0;
> + uint16_t i = 0;
> + char *dma_arg[MAX_VHOST_DEVICE];
> + int args_nr;
> +
> + while (isblank(*addrs))
> + addrs++;
> + if (*addrs == '\0') {
> + ret = -1;
> + goto out;
> + }
> +
> + /* process DMA devices within bracket. */
> + addrs++;
> + substr = strtok(addrs, ";]");
> + if (!substr) {
> + ret = -1;
> + goto out;
> + }
> +
> + args_nr = rte_strsplit(substr, strlen(substr),
> + dma_arg, MAX_VHOST_DEVICE, ',');
> + if (args_nr <= 0) {
> + ret = -1;
> + goto out;
> + }
> +
> + while (i < args_nr) {
> + char *arg_temp = dma_arg[i];
> + uint8_t sub_nr;
> +
> + sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> + if (sub_nr != 2) {
> + ret = -1;
> + goto out;
> + }
> +
> + start = strstr(ptrs[0], "txd");
> + if (start == NULL) {
> + ret = -1;
> + goto out;
> + }
> +
> + start += 3;
> + vid = strtol(start, &end, 0);
> + if (end == start) {
> + ret = -1;
> + goto out;
> + }
> +
> + vring_id = 0 + VIRTIO_RXQ;
No need to introduce vring_id; it's always VIRTIO_RXQ here.
> +
> + dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> + if (dev_id < 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to find DMA %s.\n",
> ptrs[1]);
> + ret = -1;
> + goto out;
> + } else if (is_dma_configured(dev_id)) {
> + goto done;
> + }
> +
Please call rte_dma_info_get() before configuring, to make sure info.max_vchans >= 1.
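The point about ordering: query device capabilities first and refuse devices that cannot provide even one vchan, before calling the configure step. A sketch with a stand-in for the one rte_dma_info field the check needs (not the real dmadev struct):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the rte_dma_info fields this check needs. */
struct dma_info_view {
	uint16_t max_vchans;	/* capability, valid before configure */
	uint16_t nb_vchans;	/* configured count, valid after configure */
};

/* Capability gate: a device that cannot provide at least one virtual
 * channel must be rejected before any configure/vchan-setup call. */
static bool dma_dev_usable(const struct dma_info_view *info)
{
	return info->max_vchans >= 1;
}
```

The real check would fill the struct with rte_dma_info_get(dev_id, &info) and bail out before rte_dma_configure().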
> + if (rte_dma_configure(dev_id, &dev_config) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure DMA %d.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
> +
> + if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up DMA %d.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
>
> - return -1;
> + rte_dma_info_get(dev_id, &info);
> + if (info.nb_vchans != 1) {
> + RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no queues.\n",
> dev_id);
Then the log message above should say the number of vchans is not configured, rather than that the device has no queues.
> + ret = -1;
> + goto out;
> + }
> +
> + if (rte_dma_start(dev_id) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to start DMA %u.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
> +
> + dma_config[dma_count].dev_id = dev_id;
> + dma_config[dma_count].max_vchans = 1;
> + dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> +
> +done:
> + (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> + i++;
> + }
> +out:
> + free(input);
> + return ret;
> }
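For readers following the --dmas syntax: each token parsed by open_dma() above has the shape "txd<vid>@<dma-device>". Isolated as a small stand-alone parser (plain C, illustrative helper, no DPDK calls):

```c
#include <stdlib.h>
#include <string.h>

/* Parse one "txd<vid>@<dma-device>" token; on success store the vhost
 * device id in *vid and point *dma_name at the DMA device name. */
static int parse_txd_token(const char *tok, long *vid, const char **dma_name)
{
	const char *p = strstr(tok, "txd");
	char *end;

	if (p == NULL)
		return -1;
	p += 3;	/* skip "txd" */
	*vid = strtol(p, &end, 0);
	if (end == p || *end != '@' || end[1] == '\0')
		return -1;
	*dma_name = end + 1;
	return 0;
}
```

For example, "txd0@0000:00:04.0" yields vid 0 and the device name that open_dma() hands to rte_dma_get_dev_id_by_name().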
>
> /*
> @@ -500,8 +628,6 @@ enum {
> OPT_CLIENT_NUM,
> #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> OPT_BUILTIN_NET_DRIVER_NUM,
> -#define OPT_DMA_TYPE "dma-type"
> - OPT_DMA_TYPE_NUM,
> #define OPT_DMAS "dmas"
> OPT_DMAS_NUM,
> };
> @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> NULL, OPT_CLIENT_NUM},
> {OPT_BUILTIN_NET_DRIVER, no_argument,
> NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> - {OPT_DMA_TYPE, required_argument,
> - NULL, OPT_DMA_TYPE_NUM},
> {OPT_DMAS, required_argument,
> NULL, OPT_DMAS_NUM},
> {NULL, 0, 0, 0},
> @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> }
> break;
>
> - case OPT_DMA_TYPE_NUM:
> - dma_type = optarg;
> - break;
> -
> case OPT_DMAS_NUM:
> if (open_dma(optarg) == -1) {
> RTE_LOG(INFO, VHOST_CONFIG,
> @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> {
> struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> uint16_t complete_count;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> - VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
> + VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
> if (complete_count) {
> free_pkts(p_cpl, complete_count);
> __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> __ATOMIC_SEQ_CST);
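As an aside on the accounting here: the submit path adds each rte_vhost_submit_enqueue_burst() return value to vdev->pkts_inflight, and this poll path subtracts what completed, so the counter drains to zero on teardown. The bookkeeping in isolation (plain C with GCC atomic builtins, no vhost calls):

```c
#include <assert.h>
#include <stdint.h>

/* Per-device in-flight packet counter, as vdev->pkts_inflight above. */
static uint16_t pkts_inflight;

/* Submit side: count packets handed to the DMA engine. */
static void track_submitted(uint16_t n)
{
	__atomic_add_fetch(&pkts_inflight, n, __ATOMIC_SEQ_CST);
}

/* Poll side: count packets the DMA engine has completed. */
static void track_completed(uint16_t n)
{
	__atomic_sub_fetch(&pkts_inflight, n, __ATOMIC_SEQ_CST);
}

static uint16_t inflight(void)
{
	return __atomic_load_n(&pkts_inflight, __ATOMIC_SEQ_CST);
}
```

The atomics matter because destroy_device()/vring_state_changed() may drain the counter from a different thread than the one enqueueing.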
> @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
>
> if (builtin_net_driver) {
> ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> - } else if (async_vhost_driver) {
> + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t enqueue_fail = 0;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_async_pkts(vdev);
> - ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m,
> nr_xmit);
> + ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m,
> nr_xmit, dma_id, 0);
> __atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
>
> enqueue_fail = nr_xmit - ret;
> @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> __ATOMIC_SEQ_CST);
> }
>
> - if (!async_vhost_driver)
> + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> free_pkts(m, nr_xmit);
> }
>
> @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> if (builtin_net_driver) {
> enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> pkts, rx_count);
> - } else if (async_vhost_driver) {
> + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t enqueue_fail = 0;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_async_pkts(vdev);
> enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> - VIRTIO_RXQ, pkts, rx_count);
> + VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
> __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> __ATOMIC_SEQ_CST);
>
> enqueue_fail = rx_count - enqueue_count;
> @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> __ATOMIC_SEQ_CST);
> }
>
> - if (!async_vhost_driver)
> + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> free_pkts(pkts, rx_count);
> }
>
> @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> "(%d) device has been removed from data core\n",
> vdev->vid);
>
> - if (async_vhost_driver) {
> + if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t n_pkt = 0;
> + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> struct rte_mbuf *m_cpl[vdev->pkts_inflight];
>
> while (vdev->pkts_inflight) {
> n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> - m_cpl, vdev->pkts_inflight);
> + m_cpl, vdev->pkts_inflight, dma_id, 0);
> free_pkts(m_cpl, n_pkt);
> __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> __ATOMIC_SEQ_CST);
> }
>
> rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> }
>
> rte_free(vdev);
> @@ -1468,20 +1593,14 @@ new_device(int vid)
> "(%d) device has been added to data core %d\n",
> vid, vdev->coreid);
>
> - if (async_vhost_driver) {
> - struct rte_vhost_async_config config = {0};
> - struct rte_vhost_async_channel_ops channel_ops;
> -
> - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> - channel_ops.transfer_data = ioat_transfer_data_cb;
> - channel_ops.check_completed_copies =
> - ioat_check_completed_copies_cb;
> -
> - config.features = RTE_VHOST_ASYNC_INORDER;
> + if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> + int ret;
>
> - return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
> - config, &channel_ops);
> + ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> + if (ret == 0) {
> + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true;
> }
> + return ret;
> }
>
> return 0;
> @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t queue_id, int
> enable)
> if (queue_id != VIRTIO_RXQ)
> return 0;
>
> - if (async_vhost_driver) {
> + if (dma_bind[vid].dmas[queue_id].async_enabled) {
> if (!enable) {
> uint16_t n_pkt = 0;
> + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> struct rte_mbuf *m_cpl[vdev->pkts_inflight];
>
> while (vdev->pkts_inflight) {
> n_pkt = rte_vhost_clear_queue_thread_unsafe(vid,
> queue_id,
> - m_cpl, vdev->pkts_inflight);
> + m_cpl, vdev->pkts_inflight, dma_id,
> 0);
> free_pkts(m_cpl, n_pkt);
> __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> __ATOMIC_SEQ_CST);
> }
> @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t
> nr_switch_core, uint32_t mbuf_size,
> rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> }
>
> +static void
> +init_dma(void)
> +{
> + int i;
> +
> + for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> + int j;
> +
> + for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> + dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> + dma_bind[i].dmas[j].async_enabled = false;
> + }
> + }
> +
> + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> + dma_config[i].dev_id = INVALID_DMA_ID;
> + }
> +}
> +
> /*
> * Main function, does initialisation and calls the per-lcore functions.
> */
> @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> argc -= ret;
> argv += ret;
>
> + /* initialize dma structures */
> + init_dma();
> +
> /* parse app arguments */
> ret = us_vhost_parse_args(argc, argv);
> if (ret < 0)
> @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> if (client_mode)
> flags |= RTE_VHOST_USER_CLIENT;
>
> + if (async_vhost_driver) {
> + if (rte_vhost_async_dma_configure(dma_config, dma_count) < 0) {
> + RTE_LOG(ERR, VHOST_PORT, "Failed to configure DMA in
> vhost.\n");
> + for (i = 0; i < dma_count; i++) {
> + if (dma_config[i].dev_id != INVALID_DMA_ID) {
> + rte_dma_stop(dma_config[i].dev_id);
> + dma_config[i].dev_id = INVALID_DMA_ID;
> + }
> + }
> + dma_count = 0;
> + async_vhost_driver = false;
> + }
> + }
> +
> /* Register vhost user driver to handle vhost messages. */
> for (i = 0; i < nb_sockets; i++) {
> char *file = socket_files + i * PATH_MAX;
> diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> index e7b1ac60a6..b4a453e77e 100644
> --- a/examples/vhost/main.h
> +++ b/examples/vhost/main.h
> @@ -8,6 +8,7 @@
> #include <sys/queue.h>
>
> #include <rte_ether.h>
> +#include <rte_pci.h>
>
> /* Macros for printing using RTE_LOG */
> #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> @@ -79,6 +80,16 @@ struct lcore_info {
> struct vhost_dev_tailq_list vdev_list;
> };
>
> +struct dma_info {
> + struct rte_pci_addr addr;
> + int16_t dev_id;
> + bool async_enabled;
> +};
> +
> +struct dma_for_vhost {
> + struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> +};
> +
> /* we implement non-extra virtio net features */
> #define VIRTIO_NET_FEATURES 0
>
> diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> index 3efd5e6540..87a637f83f 100644
> --- a/examples/vhost/meson.build
> +++ b/examples/vhost/meson.build
> @@ -12,13 +12,9 @@ if not is_linux
> endif
>
> deps += 'vhost'
> +deps += 'dmadev'
> allow_experimental_apis = true
> sources = files(
> 'main.c',
> 'virtio_net.c',
> )
> -
> -if dpdk_conf.has('RTE_RAW_IOAT')
> - deps += 'raw_ioat'
> - sources += files('ioat.c')
> -endif
> diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> index cdb37a4814..8107329400 100644
> --- a/lib/vhost/meson.build
> +++ b/lib/vhost/meson.build
> @@ -33,7 +33,8 @@ headers = files(
> 'rte_vhost_async.h',
> 'rte_vhost_crypto.h',
> )
> +
> driver_sdk_headers = files(
> 'vdpa_driver.h',
> )
> -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index a87ea6ba37..23a7a2d8b3 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> };
>
> /**
> - * dma transfer status
> + * DMA device information
> */
> -struct rte_vhost_async_status {
> - /** An array of application specific data for source memory */
> - uintptr_t *src_opaque_data;
> - /** An array of application specific data for destination memory */
> - uintptr_t *dst_opaque_data;
> -};
> -
> -/**
> - * dma operation callbacks to be implemented by applications
> - */
> -struct rte_vhost_async_channel_ops {
> - /**
> - * instruct async engines to perform copies for a batch of packets
> - *
> - * @param vid
> - * id of vhost device to perform data copies
> - * @param queue_id
> - * queue id to perform data copies
> - * @param iov_iter
> - * an array of IOV iterators
> - * @param opaque_data
> - * opaque data pair sending to DMA engine
> - * @param count
> - * number of elements in the "descs" array
> - * @return
> - * number of IOV iterators processed, negative value means error
> - */
> - int32_t (*transfer_data)(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t count);
> - /**
> - * check copy-completed packets from the async engine
> - * @param vid
> - * id of vhost device to check copy completion
> - * @param queue_id
> - * queue id to check copy completion
> - * @param opaque_data
> - * buffer to receive the opaque data pair from DMA engine
> - * @param max_packets
> - * max number of packets could be completed
> - * @return
> - * number of async descs completed, negative value means error
> - */
> - int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets);
> -};
> -
> -/**
> - * async channel features
> - */
> -enum {
> - RTE_VHOST_ASYNC_INORDER = 1U << 0,
> -};
> -
> -/**
> - * async channel configuration
> - */
> -struct rte_vhost_async_config {
> - uint32_t features;
> - uint32_t rsvd[2];
> +struct rte_vhost_async_dma_info {
> + int16_t dev_id; /* DMA device ID */
> + uint16_t max_vchans; /* max number of vchan */
> + uint16_t max_desc; /* max desc number of vchan */
> };
>
> /**
> @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> * vhost device id async channel to be attached to
> * @param queue_id
> * vhost queue id async channel to be attached to
> - * @param config
> - * Async channel configuration structure
> - * @param ops
> - * Async channel operation callbacks
> * @return
> * 0 on success, -1 on failures
> */
> __rte_experimental
> -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops);
> +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
>
> /**
> * Unregister an async channel for a vhost queue
> @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid, uint16_t
> queue_id);
> * vhost device id async channel to be attached to
> * @param queue_id
> * vhost queue id async channel to be attached to
> - * @param config
> - * Async channel configuration
> - * @param ops
> - * Async channel operation callbacks
> * @return
> * 0 on success, -1 on failures
> */
> __rte_experimental
> -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops);
> +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> queue_id);
>
> /**
> * Unregister an async channel for a vhost queue without performing any
> @@ -179,12 +109,17 @@ int rte_vhost_async_channel_unregister_thread_unsafe(int
> vid,
> * array of packets to be enqueued
> * @param count
> * packets num to be enqueued
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * num of packets enqueued
> */
> __rte_experimental
> uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
All dma_id parameters in the API should be uint16_t; otherwise you need to check that the value is valid.
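To illustrate the point above, here is a minimal sketch of the range check the API would need if dma_id stays signed. The helper name is hypothetical, and the RTE_DMADEV_DEFAULT_MAX value is assumed for this sketch rather than taken from a real DPDK build:

```c
#include <stdbool.h>
#include <stdint.h>

#define RTE_DMADEV_DEFAULT_MAX 64 /* assumed value, for this sketch only */

/* Hypothetical helper: reject negative or out-of-range DMA device IDs
 * before they are used to index per-device tracking arrays. */
static bool
dma_id_is_valid(int16_t dma_id)
{
	return dma_id >= 0 && dma_id < RTE_DMADEV_DEFAULT_MAX;
}
```

With an unsigned dma_id the negative half of this check disappears, which is the reviewer's point.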
>
> /**
> * This function checks async completion status for a specific vhost
> @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int vid,
> uint16_t queue_id,
> * blank array to get return packet pointer
> * @param count
> * size of the packet array
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * num of packets returned
> */
> __rte_experimental
> uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
>
> /**
> * This function returns the amount of in-flight packets for the vhost
> @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t
> queue_id);
> * Blank array to get return packet pointer
> * @param count
> * Size of the packet array
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * Number of packets returned
> */
> __rte_experimental
> uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
> +/**
> + * The DMA vChannels used in asynchronous data path must be configured
> + * first. So this function needs to be called before enabling DMA
> + * acceleration for vring. If this function fails, asynchronous data path
> + * cannot be enabled for any vring further.
> + *
> + * @param dmas
> + * DMA information
> + * @param count
> + * Element number of 'dmas'
> + * @return
> + * 0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> + uint16_t count);
I think, based on the current design, vhost can use every vchan if the user app lets it.
So max_desc and max_vchans could just be obtained from the dmadev APIs, and there would
be no need to introduce the new ABI struct rte_vhost_async_dma_info.
And about max_desc: I see that in the dmadev lib you can get a vchan's max_desc, but you
may use an nb_desc (<= max_desc) to configure the vchan. And IIUC, vhost wants to
know nb_desc instead of max_desc?
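For context, the per-device limits the comment refers to can be read back through rte_dma_info_get(). The sketch below stubs that call so it is self-contained; the stub and the values it returns are placeholders, not real driver behaviour:

```c
#include <stdint.h>

/* Shape mirroring the dmadev info fields used in this discussion. */
struct rte_dma_info {
	uint16_t max_vchans; /* max vchans the device supports */
	uint16_t max_desc;   /* max ring depth per vchan */
};

/* Stub standing in for the real rte_dma_info_get(); a real build
 * would link against librte_dmadev instead. */
static int
rte_dma_info_get(int16_t dev_id, struct rte_dma_info *info)
{
	(void)dev_id;
	info->max_vchans = 4;
	info->max_desc = 4096;
	return 0;
}

/* vhost could derive the limits itself instead of receiving them
 * through a new struct rte_vhost_async_dma_info. */
static int
query_dma_limits(int16_t dev_id, uint16_t *vchans, uint16_t *desc)
{
	struct rte_dma_info info;

	if (rte_dma_info_get(dev_id, &info) != 0)
		return -1;
	*vchans = info.max_vchans;
	*desc = info.max_desc;
	return 0;
}
```

Note this only covers max_desc; as the comment says, the nb_desc actually used to configure the vchan can be smaller and is what vhost would really need.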
>
> #endif /* _RTE_VHOST_ASYNC_H_ */
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index a7ef7f1976..1202ba9c1a 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -84,6 +84,9 @@ EXPERIMENTAL {
>
> # added in 21.11
> rte_vhost_get_monitor_addr;
> +
> + # added in 22.03
> + rte_vhost_async_dma_configure;
> };
>
> INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 13a9bb9dd1..32f37f4851 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> return;
>
> rte_free(vq->async->pkts_info);
> + rte_free(vq->async->pkts_cmpl_flag);
>
> rte_free(vq->async->buffers_packed);
> vq->async->buffers_packed = NULL;
> @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> }
>
> static __rte_always_inline int
> -async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_channel_ops *ops)
> +async_channel_register(int vid, uint16_t queue_id)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t queue_id,
> goto out_free_async;
> }
>
> + async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool),
> + RTE_CACHE_LINE_SIZE, node);
> + if (!async->pkts_cmpl_flag) {
> + VHOST_LOG_CONFIG(ERR, "failed to allocate async pkts_cmpl_flag
> (vid %d, qid: %d)\n",
> + vid, queue_id);
qid should be printed with %u here, since queue_id is unsigned.
> + goto out_free_async;
> + }
> +
> if (vq_is_packed(dev)) {
> async->buffers_packed = rte_malloc_socket(NULL,
> vq->size * sizeof(struct vring_used_elem_packed),
> @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t queue_id,
> }
> }
>
> - async->ops.check_completed_copies = ops->check_completed_copies;
> - async->ops.transfer_data = ops->transfer_data;
> -
> vq->async = async;
>
> return 0;
> @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t queue_id,
> }
>
> int
> -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops)
> +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> {
> struct vhost_virtqueue *vq;
> struct virtio_net *dev = get_device(vid);
> int ret;
>
> - if (dev == NULL || ops == NULL)
> + if (dev == NULL)
> return -1;
>
> if (queue_id >= VHOST_MAX_VRING)
> @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid, uint16_t
> queue_id,
> if (unlikely(vq == NULL || !dev->async_copy))
> return -1;
>
> - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> - VHOST_LOG_CONFIG(ERR,
> - "async copy is not supported on non-inorder mode "
> - "(vid %d, qid: %d)\n", vid, queue_id);
> - return -1;
> - }
> -
> - if (unlikely(ops->check_completed_copies == NULL ||
> - ops->transfer_data == NULL))
> - return -1;
> -
> rte_spinlock_lock(&vq->access_lock);
> - ret = async_channel_register(vid, queue_id, ops);
> + ret = async_channel_register(vid, queue_id);
> rte_spinlock_unlock(&vq->access_lock);
>
> return ret;
> }
>
> int
> -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops)
> +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
> {
> struct vhost_virtqueue *vq;
> struct virtio_net *dev = get_device(vid);
>
> - if (dev == NULL || ops == NULL)
> + if (dev == NULL)
> return -1;
>
> if (queue_id >= VHOST_MAX_VRING)
> @@ -1747,18 +1737,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid,
> uint16_t queue_id,
> if (unlikely(vq == NULL || !dev->async_copy))
> return -1;
>
> - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> - VHOST_LOG_CONFIG(ERR,
> - "async copy is not supported on non-inorder mode "
> - "(vid %d, qid: %d)\n", vid, queue_id);
> - return -1;
> - }
> -
> - if (unlikely(ops->check_completed_copies == NULL ||
> - ops->transfer_data == NULL))
> - return -1;
> -
> - return async_channel_register(vid, queue_id, ops);
> + return async_channel_register(vid, queue_id);
> }
>
> int
> @@ -1835,6 +1814,83 @@ rte_vhost_async_channel_unregister_thread_unsafe(int
> vid, uint16_t queue_id)
> return 0;
> }
>
> +static __rte_always_inline void
> +vhost_free_async_dma_mem(void)
> +{
> + uint16_t i;
> +
> + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> + struct async_dma_info *dma = &dma_copy_track[i];
> + int16_t j;
> +
> + if (dma->max_vchans == 0) {
> + continue;
> + }
> +
> + for (j = 0; j < dma->max_vchans; j++) {
> + rte_free(dma->vchans[j].metadata);
> + }
> + rte_free(dma->vchans);
> + dma->vchans = NULL;
> + dma->max_vchans = 0;
> + }
> +}
> +
> +int
> +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas, uint16_t
> count)
> +{
> + uint16_t i;
> +
> + if (!dmas) {
> + VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration parameter.\n");
> + return -1;
> + }
> +
> + for (i = 0; i < count; i++) {
> + struct async_dma_vchan_info *vchans;
> + int16_t dev_id;
> + uint16_t max_vchans;
> + uint16_t max_desc;
> + uint16_t j;
> +
> + dev_id = dmas[i].dev_id;
> + max_vchans = dmas[i].max_vchans;
> + max_desc = dmas[i].max_desc;
> +
> + if (!rte_is_power_of_2(max_desc)) {
> + max_desc = rte_align32pow2(max_desc);
> + }
I think when aligning to a power of 2, the result should not exceed max_desc?
And based on the above comment, if this max_desc is the nb_desc configured for the
vchan, you should just make sure nb_desc is a power of 2.
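To make the align-up vs. align-down distinction concrete, the helpers below reimplement the semantics of DPDK's rte_align32pow2() (round up) and rte_align32prevpow2() (round down) in plain C; rounding down is the variant that never exceeds max_desc:

```c
#include <stdint.h>

/* Round up to the next power of 2 (semantics of rte_align32pow2). */
static uint32_t
align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

/* Round down to the previous power of 2 (semantics of
 * rte_align32prevpow2); the result never exceeds the input. */
static uint32_t
align32prevpow2(uint32_t x)
{
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x - (x >> 1);
}
```

For example, a max_desc of 5000 aligns up to 8192 (exceeding the limit) but down to 4096 (within it).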
> +
> + vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) *
> max_vchans,
> + RTE_CACHE_LINE_SIZE);
> + if (vchans == NULL) {
> + VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans for dma-
> %d."
> + " Cannot enable async data-path.\n", dev_id);
> + vhost_free_async_dma_mem();
> + return -1;
> + }
> +
> + for (j = 0; j < max_vchans; j++) {
> + vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *) *
> max_desc,
> + RTE_CACHE_LINE_SIZE);
> + if (!vchans[j].metadata) {
> + VHOST_LOG_CONFIG(ERR, "Failed to allocate metadata for
> "
> + "dma-%d vchan-%u\n", dev_id, j);
> + vhost_free_async_dma_mem();
> + return -1;
> + }
> +
> + vchans[j].ring_size = max_desc;
> + vchans[j].ring_mask = max_desc - 1;
> + }
> +
> + dma_copy_track[dev_id].vchans = vchans;
> + dma_copy_track[dev_id].max_vchans = max_vchans;
> + }
> +
> + return 0;
> +}
> +
> int
> rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> {
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 7085e0885c..d9bda34e11 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -19,6 +19,7 @@
> #include <rte_ether.h>
> #include <rte_rwlock.h>
> #include <rte_malloc.h>
> +#include <rte_dmadev.h>
>
> #include "rte_vhost.h"
> #include "rte_vdpa.h"
> @@ -50,6 +51,7 @@
>
> #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> #define VHOST_MAX_ASYNC_VEC 2048
> +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
>
> #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
> ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
> @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> uint32_t count;
> };
>
> +struct async_dma_vchan_info {
> + /* circular array to track copy metadata */
> + bool **metadata;
If the metadata will only ever hold flags, maybe use a name like XXX_flag instead.
> +
> + /* max elements in 'metadata' */
> + uint16_t ring_size;
> + /* ring index mask for 'metadata' */
> + uint16_t ring_mask;
> +
> + /* batching copies before a DMA doorbell */
> + uint16_t nr_batching;
> +
> + /**
> + * DMA virtual channel lock. Although it is able to bind DMA
> + * virtual channels to data plane threads, vhost control plane
> + * thread could call data plane functions too, thus causing
> + * DMA device contention.
> + *
> + * For example, in VM exit case, vhost control plane thread needs
> + * to clear in-flight packets before disable vring, but there could
> + * be anotther data plane thread is enqueuing packets to the same
> + * vring with the same DMA virtual channel. But dmadev PMD functions
> + * are lock-free, so the control plane and data plane threads
> + * could operate the same DMA virtual channel at the same time.
> + */
> + rte_spinlock_t dma_lock;
> +};
> +
> +struct async_dma_info {
> + uint16_t max_vchans;
> + struct async_dma_vchan_info *vchans;
> +};
> +
> +extern struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> +
> /**
> * inflight async packet information
> */
> @@ -129,9 +166,6 @@ struct async_inflight_info {
> };
>
> struct vhost_async {
> - /* operation callbacks for DMA */
> - struct rte_vhost_async_channel_ops ops;
> -
> struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> uint16_t iter_idx;
> @@ -139,6 +173,19 @@ struct vhost_async {
>
> /* data transfer status */
> struct async_inflight_info *pkts_info;
> + /**
> + * packet reorder array. "true" indicates that DMA
> + * device completes all copies for the packet.
> + *
> + * Note that this array could be written by multiple
> + * threads at the same time. For example, two threads
> + * enqueue packets to the same virtqueue with their
> + * own DMA devices. However, since offloading is
> + * per-packet basis, each packet flag will only be
> + * written by one thread. And single byte write is
> + * atomic, so no lock is needed.
> + */
> + bool *pkts_cmpl_flag;
> uint16_t pkts_idx;
> uint16_t pkts_inflight_n;
> union {
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index b3d954aab4..9f81fc9733 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -11,6 +11,7 @@
> #include <rte_net.h>
> #include <rte_ether.h>
> #include <rte_ip.h>
> +#include <rte_dmadev.h>
> #include <rte_vhost.h>
> #include <rte_tcp.h>
> #include <rte_udp.h>
> @@ -25,6 +26,9 @@
>
> #define MAX_BATCH_LEN 256
>
> +/* DMA device copy operation tracking array. */
> +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> +
> static __rte_always_inline bool
> rxvq_is_mergeable(struct virtio_net *dev)
> {
> @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t
> nr_vring)
> return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> }
>
> +static __rte_always_inline uint16_t
> +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> + uint16_t vchan, uint16_t head_idx,
> + struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> +{
> + struct async_dma_vchan_info *dma_info =
> &dma_copy_track[dma_id].vchans[vchan];
> + uint16_t ring_mask = dma_info->ring_mask;
> + uint16_t pkt_idx;
> +
> + rte_spinlock_lock(&dma_info->dma_lock);
> +
> + for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> + struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> + int copy_idx = 0;
> + uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> + uint16_t i;
> +
> + if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> + goto out;
> + }
> +
> + for (i = 0; i < nr_segs; i++) {
> + /**
> + * We have checked the available space before submit copies
> to DMA
> + * vChannel, so we don't handle error here.
> + */
> + copy_idx = rte_dma_copy(dma_id, vchan,
> (rte_iova_t)iov[i].src_addr,
> + (rte_iova_t)iov[i].dst_addr, iov[i].len,
> + RTE_DMA_OP_FLAG_LLC);
This assumes rte_dma_copy will always succeed if there's available space.
But the API doxygen says:
* @return
* - 0..UINT16_MAX: index of enqueued job.
* - -ENOSPC: if no space left to enqueue.
* - other values < 0 on failure.
So the other, vendor-specific errors should be handled as well.
Thanks,
Chenbo
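A sketch of the error handling this comment asks for, with a stubbed copy call so it is self-contained (the stub and the -EIO failure it injects after three copies are purely illustrative): the loop stops submitting on any negative return, not just -ENOSPC.

```c
#include <errno.h>
#include <stdint.h>

/* Stub standing in for rte_dma_copy(): returns the enqueue index on
 * success, -ENOSPC when the ring is full, or another negative errno
 * on a vendor-specific failure. This stub fails after 3 copies. */
static int
dma_copy_stub(uint16_t nr_done)
{
	if (nr_done >= 3)
		return -EIO;
	return nr_done; /* job index */
}

/* Enqueue up to nr_segs copies, bailing out on any error. */
static uint16_t
submit_segs(uint16_t nr_segs)
{
	uint16_t i;

	for (i = 0; i < nr_segs; i++) {
		int copy_idx = dma_copy_stub(i);

		if (copy_idx < 0)
			break; /* -ENOSPC or vendor-specific error */
	}
	return i; /* number of segments actually enqueued */
}
```

The caller can then treat the shortfall (nr_segs minus the return value) the same way the existing pkt_err path does.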
> +
> + /**
> + * Only store packet completion flag address in the last
> copy's
> + * slot, and other slots are set to NULL.
> + */
> + if (unlikely(i == (nr_segs - 1))) {
> + dma_info->metadata[copy_idx & ring_mask] =
> + &vq->async->pkts_cmpl_flag[head_idx % vq->size];
> + }
> + }
> +
> + dma_info->nr_batching += nr_segs;
> + if (unlikely(dma_info->nr_batching >=
> VHOST_ASYNC_DMA_BATCHING_SIZE)) {
> + rte_dma_submit(dma_id, vchan);
> + dma_info->nr_batching = 0;
> + }
> +
> + head_idx++;
> + }
> +
> +out:
> + if (dma_info->nr_batching > 0) {
> + rte_dma_submit(dma_id, vchan);
> + dma_info->nr_batching = 0;
> + }
> + rte_spinlock_unlock(&dma_info->dma_lock);
> +
> + return pkt_idx;
> +}
> +
> +static __rte_always_inline uint16_t
> +vhost_async_dma_check_completed(int16_t dma_id, uint16_t vchan, uint16_t
> max_pkts)
> +{
> + struct async_dma_vchan_info *dma_info =
> &dma_copy_track[dma_id].vchans[vchan];
> + uint16_t ring_mask = dma_info->ring_mask;
> + uint16_t last_idx = 0;
> + uint16_t nr_copies;
> + uint16_t copy_idx;
> + uint16_t i;
> +
> + rte_spinlock_lock(&dma_info->dma_lock);
> +
> + /**
> + * Since all memory is pinned and addresses should be valid,
> + * we don't check errors.
> + */
> + nr_copies = rte_dma_completed(dma_id, vchan, max_pkts, &last_idx, NULL);
> + if (nr_copies == 0) {
> + goto out;
> + }
> +
> + copy_idx = last_idx - nr_copies + 1;
> + for (i = 0; i < nr_copies; i++) {
> + bool *flag;
> +
> + flag = dma_info->metadata[copy_idx & ring_mask];
> + if (flag) {
> + /**
> + * Mark the packet flag as received. The flag
> + * could belong to another virtqueue but write
> + * is atomic.
> + */
> + *flag = true;
> + dma_info->metadata[copy_idx & ring_mask] = NULL;
> + }
> + copy_idx++;
> + }
> +
> +out:
> + rte_spinlock_unlock(&dma_info->dma_lock);
> + return nr_copies;
> +}
> +
> static inline void
> do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
> {
> @@ -1449,9 +1555,9 @@ store_dma_desc_info_packed(struct vring_used_elem_packed
> *s_ring,
> }
>
> static __rte_noinline uint32_t
> -virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> - struct vhost_virtqueue *vq, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> +virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
> + uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
> + int16_t dma_id, uint16_t vchan)
> {
> struct buf_vector buf_vec[BUF_VECTOR_MAX];
> uint32_t pkt_idx = 0;
> @@ -1503,17 +1609,16 @@ virtio_dev_rx_async_submit_split(struct virtio_net
> *dev,
> if (unlikely(pkt_idx == 0))
> return 0;
>
> - n_xfer = async->ops.transfer_data(dev->vid, queue_id, async->iov_iter, 0,
> pkt_idx);
> - if (unlikely(n_xfer < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to transfer data for queue
> id %d.\n",
> - dev->vid, __func__, queue_id);
> - n_xfer = 0;
> - }
> + n_xfer = vhost_async_dma_transfer(vq, dma_id, vchan, async->pkts_idx,
> async->iov_iter,
> + pkt_idx);
>
> pkt_err = pkt_idx - n_xfer;
> if (unlikely(pkt_err)) {
> uint16_t num_descs = 0;
>
> + VHOST_LOG_DATA(DEBUG, "(%d) %s: failed to transfer %u packets for
> queue %u.\n",
> + dev->vid, __func__, pkt_err, queue_id);
> +
> /* update number of completed packets */
> pkt_idx = n_xfer;
>
> @@ -1656,13 +1761,13 @@ dma_error_handler_packed(struct vhost_virtqueue *vq,
> uint16_t slot_idx,
> }
>
> static __rte_noinline uint32_t
> -virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> - struct vhost_virtqueue *vq, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> +virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
> + uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
> + int16_t dma_id, uint16_t vchan)
> {
> uint32_t pkt_idx = 0;
> uint32_t remained = count;
> - int32_t n_xfer;
> + uint16_t n_xfer;
> uint16_t num_buffers;
> uint16_t num_descs;
>
> @@ -1670,6 +1775,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> struct async_inflight_info *pkts_info = async->pkts_info;
> uint32_t pkt_err = 0;
> uint16_t slot_idx = 0;
> + uint16_t head_idx = async->pkts_idx % vq->size;
>
> do {
> rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> @@ -1694,19 +1800,17 @@ virtio_dev_rx_async_submit_packed(struct virtio_net
> *dev,
> if (unlikely(pkt_idx == 0))
> return 0;
>
> - n_xfer = async->ops.transfer_data(dev->vid, queue_id, async->iov_iter, 0,
> pkt_idx);
> - if (unlikely(n_xfer < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to transfer data for queue
> id %d.\n",
> - dev->vid, __func__, queue_id);
> - n_xfer = 0;
> - }
> -
> - pkt_err = pkt_idx - n_xfer;
> + n_xfer = vhost_async_dma_transfer(vq, dma_id, vchan, head_idx,
> + async->iov_iter, pkt_idx);
>
> async_iter_reset(async);
>
> - if (unlikely(pkt_err))
> + pkt_err = pkt_idx - n_xfer;
> + if (unlikely(pkt_err)) {
> + VHOST_LOG_DATA(DEBUG, "(%d) %s: failed to transfer %u packets for
> queue %u.\n",
> + dev->vid, __func__, pkt_err, queue_id);
> dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx);
> + }
>
> if (likely(vq->shadow_used_idx)) {
> /* keep used descriptors. */
> @@ -1826,28 +1930,37 @@ write_back_completed_descs_packed(struct
> vhost_virtqueue *vq,
>
> static __rte_always_inline uint16_t
> vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> struct vhost_async *async = vq->async;
> struct async_inflight_info *pkts_info = async->pkts_info;
> - int32_t n_cpl;
> + uint16_t nr_cpl_pkts = 0;
> uint16_t n_descs = 0, n_buffers = 0;
> uint16_t start_idx, from, i;
>
> - n_cpl = async->ops.check_completed_copies(dev->vid, queue_id, 0, count);
> - if (unlikely(n_cpl < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to check completed copies for
> queue id %d.\n",
> - dev->vid, __func__, queue_id);
> - return 0;
> - }
> -
> - if (n_cpl == 0)
> - return 0;
> + /* Check completed copies for the given DMA vChannel */
> + vhost_async_dma_check_completed(dma_id, vchan, count);
>
> start_idx = async_get_first_inflight_pkt_idx(vq);
>
> - for (i = 0; i < n_cpl; i++) {
> + /**
> + * Calculate the number of copy completed packets.
> + * Note that there may be completed packets even if
> + * no copies are reported done by the given DMA vChannel,
> + * as DMA vChannels could be shared by other threads.
> + */
> + from = start_idx;
> + while (vq->async->pkts_cmpl_flag[from] && count--) {
> + vq->async->pkts_cmpl_flag[from] = false;
> + from++;
> + if (from >= vq->size)
> + from -= vq->size;
> + nr_cpl_pkts++;
> + }
> +
> + for (i = 0; i < nr_cpl_pkts; i++) {
> from = (start_idx + i) % vq->size;
> /* Only used with packed ring */
> n_buffers += pkts_info[from].nr_buffers;
> @@ -1856,7 +1969,7 @@ vhost_poll_enqueue_completed(struct virtio_net *dev,
> uint16_t queue_id,
> pkts[i] = pkts_info[from].mbuf;
> }
>
> - async->pkts_inflight_n -= n_cpl;
> + async->pkts_inflight_n -= nr_cpl_pkts;
>
> if (likely(vq->enabled && vq->access_ok)) {
> if (vq_is_packed(dev)) {
> @@ -1877,12 +1990,13 @@ vhost_poll_enqueue_completed(struct virtio_net *dev,
> uint16_t queue_id,
> }
> }
>
> - return n_cpl;
> + return nr_cpl_pkts;
> }
>
> uint16_t
> rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq;
> @@ -1908,7 +2022,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t
> queue_id,
>
> rte_spinlock_lock(&vq->access_lock);
>
> - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
> + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
> dma_id, vchan);
>
> rte_spinlock_unlock(&vq->access_lock);
>
> @@ -1917,7 +2031,8 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t
> queue_id,
>
> uint16_t
> rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq;
> @@ -1941,14 +2056,14 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t
> queue_id,
> return 0;
> }
>
> - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
> + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
> dma_id, vchan);
>
> return n_pkts_cpl;
> }
>
> static __rte_always_inline uint32_t
> virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> + struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan)
> {
> struct vhost_virtqueue *vq;
> uint32_t nb_tx = 0;
> @@ -1980,10 +2095,10 @@ virtio_dev_rx_async_submit(struct virtio_net *dev,
> uint16_t queue_id,
>
> if (vq_is_packed(dev))
> nb_tx = virtio_dev_rx_async_submit_packed(dev, vq, queue_id,
> - pkts, count);
> + pkts, count, dma_id, vchan);
> else
> nb_tx = virtio_dev_rx_async_submit_split(dev, vq, queue_id,
> - pkts, count);
> + pkts, count, dma_id, vchan);
>
> out:
> if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> @@ -1997,7 +2112,8 @@ virtio_dev_rx_async_submit(struct virtio_net *dev,
> uint16_t queue_id,
>
> uint16_t
> rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
>
> @@ -2011,7 +2127,7 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t
> queue_id,
> return 0;
> }
>
> - return virtio_dev_rx_async_submit(dev, queue_id, pkts, count);
> + return virtio_dev_rx_async_submit(dev, queue_id, pkts, count, dma_id,
> vchan);
> }
>
> static inline bool
> --
> 2.25.1
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
2022-01-14 6:30 3% ` Xia, Chenbo
@ 2022-01-17 5:39 0% ` Hu, Jiayu
2022-01-19 2:18 0% ` Xia, Chenbo
0 siblings, 1 reply; 200+ results
From: Hu, Jiayu @ 2022-01-17 5:39 UTC (permalink / raw)
To: Xia, Chenbo, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
Hi Chenbo,
Please see replies inline.
Thanks,
Jiayu
> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index 33d023aa39..44073499bc 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -24,8 +24,9 @@
> > #include <rte_ip.h>
> > #include <rte_tcp.h>
> > #include <rte_pause.h>
> > +#include <rte_dmadev.h>
> > +#include <rte_vhost_async.h>
> >
> > -#include "ioat.h"
> > #include "main.h"
> >
> > #ifndef MAX_QUEUES
> > @@ -56,6 +57,14 @@
> > #define RTE_TEST_TX_DESC_DEFAULT 512
> >
> > #define INVALID_PORT_ID 0xFF
> > +#define INVALID_DMA_ID -1
> > +
> > +#define MAX_VHOST_DEVICE 1024
> > +#define DMA_RING_SIZE 4096
> > +
> > +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> > +struct rte_vhost_async_dma_info
> dma_config[RTE_DMADEV_DEFAULT_MAX];
> > +static int dma_count;
> >
> > /* mask of enabled ports */
> > static uint32_t enabled_port_mask = 0;
> > @@ -96,8 +105,6 @@ static int builtin_net_driver;
> >
> > static int async_vhost_driver;
> >
> > -static char *dma_type;
> > -
> > /* Specify timeout (in useconds) between retries on RX. */
> > static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> > /* Specify the number of retries on RX. */
> > @@ -196,13 +203,134 @@ struct vhost_bufftable
> *vhost_txbuff[RTE_MAX_LCORE *
> > MAX_VHOST_DEVICE];
> > #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \
> > / US_PER_S * BURST_TX_DRAIN_US)
> >
> > +static inline bool
> > +is_dma_configured(int16_t dev_id)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < dma_count; i++) {
> > + if (dma_config[i].dev_id == dev_id) {
> > + return true;
> > + }
> > + }
> > + return false;
> > +}
> > +
> > static inline int
> > open_dma(const char *value)
> > {
> > - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> > - return open_ioat(value);
> > + struct dma_for_vhost *dma_info = dma_bind;
> > + char *input = strndup(value, strlen(value) + 1);
> > + char *addrs = input;
> > + char *ptrs[2];
> > + char *start, *end, *substr;
> > + int64_t vid, vring_id;
> > +
> > + struct rte_dma_info info;
> > + struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> > + struct rte_dma_vchan_conf qconf = {
> > + .direction = RTE_DMA_DIR_MEM_TO_MEM,
> > + .nb_desc = DMA_RING_SIZE
> > + };
> > +
> > + int dev_id;
> > + int ret = 0;
> > + uint16_t i = 0;
> > + char *dma_arg[MAX_VHOST_DEVICE];
> > + int args_nr;
> > +
> > + while (isblank(*addrs))
> > + addrs++;
> > + if (*addrs == '\0') {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + /* process DMA devices within bracket. */
> > + addrs++;
> > + substr = strtok(addrs, ";]");
> > + if (!substr) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + args_nr = rte_strsplit(substr, strlen(substr),
> > + dma_arg, MAX_VHOST_DEVICE, ',');
> > + if (args_nr <= 0) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + while (i < args_nr) {
> > + char *arg_temp = dma_arg[i];
> > + uint8_t sub_nr;
> > +
> > + sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> > + if (sub_nr != 2) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + start = strstr(ptrs[0], "txd");
> > + if (start == NULL) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + start += 3;
> > + vid = strtol(start, &end, 0);
> > + if (end == start) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + vring_id = 0 + VIRTIO_RXQ;
>
> No need to introduce vring_id, it's always VIRTIO_RXQ
I will remove it later.
>
> > +
> > + dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> > + if (dev_id < 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to find
> DMA %s.\n",
> > ptrs[1]);
> > + ret = -1;
> > + goto out;
> > + } else if (is_dma_configured(dev_id)) {
> > + goto done;
> > + }
> > +
>
> Please call rte_dma_info_get before configure to make sure
> info.max_vchans >=1
Do you suggest to use "rte_dma_info_get() and info.max_vchans=0" to indicate
the device is not configured, rather than using is_dma_configure()?
>
> > + if (rte_dma_configure(dev_id, &dev_config) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure
> DMA %d.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up
> DMA %d.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> >
> > - return -1;
> > + rte_dma_info_get(dev_id, &info);
> > + if (info.nb_vchans != 1) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no
> queues.\n",
> > dev_id);
>
> Then the above means the number of vchan is not configured.
>
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + if (rte_dma_start(dev_id) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to start
> DMA %u.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + dma_config[dma_count].dev_id = dev_id;
> > + dma_config[dma_count].max_vchans = 1;
> > + dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> > +
> > +done:
> > + (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> > + i++;
> > + }
> > +out:
> > + free(input);
> > + return ret;
> > }
> >
> > /*
> > @@ -500,8 +628,6 @@ enum {
> > OPT_CLIENT_NUM,
> > #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> > OPT_BUILTIN_NET_DRIVER_NUM,
> > -#define OPT_DMA_TYPE "dma-type"
> > - OPT_DMA_TYPE_NUM,
> > #define OPT_DMAS "dmas"
> > OPT_DMAS_NUM,
> > };
> > @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> > NULL, OPT_CLIENT_NUM},
> > {OPT_BUILTIN_NET_DRIVER, no_argument,
> > NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> > - {OPT_DMA_TYPE, required_argument,
> > - NULL, OPT_DMA_TYPE_NUM},
> > {OPT_DMAS, required_argument,
> > NULL, OPT_DMAS_NUM},
> > {NULL, 0, 0, 0},
> > @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> > }
> > break;
> >
> > - case OPT_DMA_TYPE_NUM:
> > - dma_type = optarg;
> > - break;
> > -
> > case OPT_DMAS_NUM:
> > if (open_dma(optarg) == -1) {
> > RTE_LOG(INFO, VHOST_CONFIG,
> > @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> > {
> > struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> > uint16_t complete_count;
> > + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> > - VIRTIO_RXQ, p_cpl,
> MAX_PKT_BURST);
> > + VIRTIO_RXQ, p_cpl, MAX_PKT_BURST,
> dma_id, 0);
> > if (complete_count) {
> > free_pkts(p_cpl, complete_count);
> > __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> > __ATOMIC_SEQ_CST);
> > @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
> >
> > if (builtin_net_driver) {
> > ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> > - } else if (async_vhost_driver) {
> > + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t enqueue_fail = 0;
> > + int16_t dma_id = dma_bind[vdev-
> >vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_async_pkts(vdev);
> > - ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> VIRTIO_RXQ, m,
> > nr_xmit);
> > + ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> VIRTIO_RXQ, m,
> > nr_xmit, dma_id, 0);
> > __atomic_add_fetch(&vdev->pkts_inflight, ret,
> __ATOMIC_SEQ_CST);
> >
> > enqueue_fail = nr_xmit - ret;
> > @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> > __ATOMIC_SEQ_CST);
> > }
> >
> > - if (!async_vhost_driver)
> > + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > free_pkts(m, nr_xmit);
> > }
> >
> > @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> > if (builtin_net_driver) {
> > enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> > pkts, rx_count);
> > - } else if (async_vhost_driver) {
> > + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t enqueue_fail = 0;
> > + int16_t dma_id = dma_bind[vdev-
> >vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_async_pkts(vdev);
> > enqueue_count = rte_vhost_submit_enqueue_burst(vdev-
> >vid,
> > - VIRTIO_RXQ, pkts, rx_count);
> > + VIRTIO_RXQ, pkts, rx_count, dma_id,
> 0);
> > __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> > __ATOMIC_SEQ_CST);
> >
> > enqueue_fail = rx_count - enqueue_count;
> > @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> > __ATOMIC_SEQ_CST);
> > }
> >
> > - if (!async_vhost_driver)
> > + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > free_pkts(pkts, rx_count);
> > }
> >
> > @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> > "(%d) device has been removed from data core\n",
> > vdev->vid);
> >
> > - if (async_vhost_driver) {
> > + if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t n_pkt = 0;
> > + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> >
> > while (vdev->pkts_inflight) {
> > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid,
> VIRTIO_RXQ,
> > - m_cpl, vdev->pkts_inflight);
> > + m_cpl, vdev->pkts_inflight,
> dma_id, 0);
> > free_pkts(m_cpl, n_pkt);
> > __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> > __ATOMIC_SEQ_CST);
> > }
> >
> > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > }
> >
> > rte_free(vdev);
> > @@ -1468,20 +1593,14 @@ new_device(int vid)
> > "(%d) device has been added to data core %d\n",
> > vid, vdev->coreid);
> >
> > - if (async_vhost_driver) {
> > - struct rte_vhost_async_config config = {0};
> > - struct rte_vhost_async_channel_ops channel_ops;
> > -
> > - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> > - channel_ops.transfer_data = ioat_transfer_data_cb;
> > - channel_ops.check_completed_copies =
> > - ioat_check_completed_copies_cb;
> > -
> > - config.features = RTE_VHOST_ASYNC_INORDER;
> > + if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > + int ret;
> >
> > - return rte_vhost_async_channel_register(vid,
> VIRTIO_RXQ,
> > - config, &channel_ops);
> > + ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > + if (ret == 0) {
> > + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled =
> true;
> > }
> > + return ret;
> > }
> >
> > return 0;
> > @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t
> queue_id, int
> > enable)
> > if (queue_id != VIRTIO_RXQ)
> > return 0;
> >
> > - if (async_vhost_driver) {
> > + if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > if (!enable) {
> > uint16_t n_pkt = 0;
> > + int16_t dma_id =
> dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> >
> > while (vdev->pkts_inflight) {
> > n_pkt =
> rte_vhost_clear_queue_thread_unsafe(vid,
> > queue_id,
> > - m_cpl, vdev-
> >pkts_inflight);
> > + m_cpl, vdev-
> >pkts_inflight, dma_id,
> > 0);
> > free_pkts(m_cpl, n_pkt);
> > __atomic_sub_fetch(&vdev->pkts_inflight,
> n_pkt,
> > __ATOMIC_SEQ_CST);
> > }
> > @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t
> > nr_switch_core, uint32_t mbuf_size,
> > rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> > }
> >
> > +static void
> > +init_dma(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> > + int j;
> > +
> > + for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> > + dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> > + dma_bind[i].dmas[j].async_enabled = false;
> > + }
> > + }
> > +
> > + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > + dma_config[i].dev_id = INVALID_DMA_ID;
> > + }
> > +}
> > +
> > /*
> > * Main function, does initialisation and calls the per-lcore functions.
> > */
> > @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> > argc -= ret;
> > argv += ret;
> >
> > + /* initialize dma structures */
> > + init_dma();
> > +
> > /* parse app arguments */
> > ret = us_vhost_parse_args(argc, argv);
> > if (ret < 0)
> > @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> > if (client_mode)
> > flags |= RTE_VHOST_USER_CLIENT;
> >
> > + if (async_vhost_driver) {
> > + if (rte_vhost_async_dma_configure(dma_config, dma_count)
> < 0) {
> > + RTE_LOG(ERR, VHOST_PORT, "Failed to configure
> DMA in
> > vhost.\n");
> > + for (i = 0; i < dma_count; i++) {
> > + if (dma_config[i].dev_id != INVALID_DMA_ID)
> {
> > + rte_dma_stop(dma_config[i].dev_id);
> > + dma_config[i].dev_id =
> INVALID_DMA_ID;
> > + }
> > + }
> > + dma_count = 0;
> > + async_vhost_driver = false;
> > + }
> > + }
> > +
> > /* Register vhost user driver to handle vhost messages. */
> > for (i = 0; i < nb_sockets; i++) {
> > char *file = socket_files + i * PATH_MAX;
> > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > index e7b1ac60a6..b4a453e77e 100644
> > --- a/examples/vhost/main.h
> > +++ b/examples/vhost/main.h
> > @@ -8,6 +8,7 @@
> > #include <sys/queue.h>
> >
> > #include <rte_ether.h>
> > +#include <rte_pci.h>
> >
> > /* Macros for printing using RTE_LOG */
> > #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> > @@ -79,6 +80,16 @@ struct lcore_info {
> > struct vhost_dev_tailq_list vdev_list;
> > };
> >
> > +struct dma_info {
> > + struct rte_pci_addr addr;
> > + int16_t dev_id;
> > + bool async_enabled;
> > +};
> > +
> > +struct dma_for_vhost {
> > + struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> > +};
> > +
> > /* we implement non-extra virtio net features */
> > #define VIRTIO_NET_FEATURES 0
> >
> > diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> > index 3efd5e6540..87a637f83f 100644
> > --- a/examples/vhost/meson.build
> > +++ b/examples/vhost/meson.build
> > @@ -12,13 +12,9 @@ if not is_linux
> > endif
> >
> > deps += 'vhost'
> > +deps += 'dmadev'
> > allow_experimental_apis = true
> > sources = files(
> > 'main.c',
> > 'virtio_net.c',
> > )
> > -
> > -if dpdk_conf.has('RTE_RAW_IOAT')
> > - deps += 'raw_ioat'
> > - sources += files('ioat.c')
> > -endif
> > diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> > index cdb37a4814..8107329400 100644
> > --- a/lib/vhost/meson.build
> > +++ b/lib/vhost/meson.build
> > @@ -33,7 +33,8 @@ headers = files(
> > 'rte_vhost_async.h',
> > 'rte_vhost_crypto.h',
> > )
> > +
> > driver_sdk_headers = files(
> > 'vdpa_driver.h',
> > )
> > -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> > +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > index a87ea6ba37..23a7a2d8b3 100644
> > --- a/lib/vhost/rte_vhost_async.h
> > +++ b/lib/vhost/rte_vhost_async.h
> > @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> > };
> >
> > /**
> > - * dma transfer status
> > + * DMA device information
> > */
> > -struct rte_vhost_async_status {
> > - /** An array of application specific data for source memory */
> > - uintptr_t *src_opaque_data;
> > - /** An array of application specific data for destination memory */
> > - uintptr_t *dst_opaque_data;
> > -};
> > -
> > -/**
> > - * dma operation callbacks to be implemented by applications
> > - */
> > -struct rte_vhost_async_channel_ops {
> > - /**
> > - * instruct async engines to perform copies for a batch of packets
> > - *
> > - * @param vid
> > - * id of vhost device to perform data copies
> > - * @param queue_id
> > - * queue id to perform data copies
> > - * @param iov_iter
> > - * an array of IOV iterators
> > - * @param opaque_data
> > - * opaque data pair sending to DMA engine
> > - * @param count
> > - * number of elements in the "descs" array
> > - * @return
> > - * number of IOV iterators processed, negative value means error
> > - */
> > - int32_t (*transfer_data)(int vid, uint16_t queue_id,
> > - struct rte_vhost_iov_iter *iov_iter,
> > - struct rte_vhost_async_status *opaque_data,
> > - uint16_t count);
> > - /**
> > - * check copy-completed packets from the async engine
> > - * @param vid
> > - * id of vhost device to check copy completion
> > - * @param queue_id
> > - * queue id to check copy completion
> > - * @param opaque_data
> > - * buffer to receive the opaque data pair from DMA engine
> > - * @param max_packets
> > - * max number of packets could be completed
> > - * @return
> > - * number of async descs completed, negative value means error
> > - */
> > - int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_status *opaque_data,
> > - uint16_t max_packets);
> > -};
> > -
> > -/**
> > - * async channel features
> > - */
> > -enum {
> > - RTE_VHOST_ASYNC_INORDER = 1U << 0,
> > -};
> > -
> > -/**
> > - * async channel configuration
> > - */
> > -struct rte_vhost_async_config {
> > - uint32_t features;
> > - uint32_t rsvd[2];
> > +struct rte_vhost_async_dma_info {
> > + int16_t dev_id; /* DMA device ID */
> > + uint16_t max_vchans; /* max number of vchan */
> > + uint16_t max_desc; /* max desc number of vchan */
> > };
> >
> > /**
> > @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> > * vhost device id async channel to be attached to
> > * @param queue_id
> > * vhost queue id async channel to be attached to
> > - * @param config
> > - * Async channel configuration structure
> > - * @param ops
> > - * Async channel operation callbacks
> > * @return
> > * 0 on success, -1 on failures
> > */
> > __rte_experimental
> > -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops);
> > +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
> >
> > /**
> > * Unregister an async channel for a vhost queue
> > @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid,
> uint16_t
> > queue_id);
> > * vhost device id async channel to be attached to
> > * @param queue_id
> > * vhost queue id async channel to be attached to
> > - * @param config
> > - * Async channel configuration
> > - * @param ops
> > - * Async channel operation callbacks
> > * @return
> > * 0 on success, -1 on failures
> > */
> > __rte_experimental
> > -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops);
> > +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id);
> >
> > /**
> > * Unregister an async channel for a vhost queue without performing any
> > @@ -179,12 +109,17 @@ int
> rte_vhost_async_channel_unregister_thread_unsafe(int
> > vid,
> > * array of packets to be enqueued
> > * @param count
> > * packets num to be enqueued
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * num of packets enqueued
> > */
> > __rte_experimental
> > uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
>
> All dma_id in the API should be uint16_t. Otherwise you need to check if valid.
Yes, you are right. Although dma_id is defined as int16_t and the DMA library checks
its validity, vhost doesn't handle DMA failures, so we need to make sure dma_id
is valid before using it. And even if vhost handled DMA errors, the better place to
check for an invalid dma_id would still be before passing it to the DMA library.
I will add the check later.
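For illustration, the promised check could be as simple as the sketch below (plain C outside DPDK; DMADEV_DEFAULT_MAX is a stand-in for RTE_DMADEV_DEFAULT_MAX, and the function name is hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for DPDK's RTE_DMADEV_DEFAULT_MAX. */
#define DMADEV_DEFAULT_MAX 64

/* Reject a dma_id before it is used to index a per-device tracking
 * array (such as dma_copy_track[]) or handed to the DMA library:
 * it must be non-negative and below the size of the device table. */
static inline bool
dma_id_is_valid(int16_t dma_id)
{
	return dma_id >= 0 && dma_id < DMADEV_DEFAULT_MAX;
}
```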
>
> >
> > /**
> > * This function checks async completion status for a specific vhost
> > @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int
> vid,
> > uint16_t queue_id,
> > * blank array to get return packet pointer
> > * @param count
> > * size of the packet array
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * num of packets returned
> > */
> > __rte_experimental
> > uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
> >
> > /**
> > * This function returns the amount of in-flight packets for the vhost
> > @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t
> > queue_id);
> > * Blank array to get return packet pointer
> > * @param count
> > * Size of the packet array
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * Number of packets returned
> > */
> > __rte_experimental
> > uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
> > +/**
> > + * The DMA vChannels used in asynchronous data path must be
> configured
> > + * first. So this function needs to be called before enabling DMA
> > + * acceleration for vring. If this function fails, asynchronous data path
> > + * cannot be enabled for any vring further.
> > + *
> > + * @param dmas
> > + * DMA information
> > + * @param count
> > + * Element number of 'dmas'
> > + * @return
> > + * 0 on success, and -1 on failure
> > + */
> > +__rte_experimental
> > +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info
> *dmas,
> > + uint16_t count);
>
> I think based on current design, vhost can use every vchan if user app let it.
> So the max_desc and max_vchans can just be got from dmadev APIs? Then
> there's
> no need to introduce the new ABI struct rte_vhost_async_dma_info.
Yes, no need to introduce struct rte_vhost_async_dma_info. We can either use
struct rte_dma_info, as suggested by Maxime, or query the DMA library via
device id. Since DMA device configuration is left to applications, I prefer to
use rte_dma_info directly. What do you think?
>
> And about max_desc, I see the dmadev lib, you can get vchan's max_desc
> but you
> may use a nb_desc (<= max_desc) to configure vchanl. And IIUC, vhost wants
> to
> know the nb_desc instead of max_desc?
True, nb_desc is better than max_desc. But the DMA library doesn't provide a
function to query nb_desc for every vchannel. And rte_dma_info cannot be used in
rte_vhost_async_dma_configure() if vhost uses nb_desc. So the only way is
to require users to provide nb_desc for every vchannel, which will introduce
a new struct. Is it really needed?
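Just to illustrate the ABI cost being weighed here, such a struct would look roughly like this (purely hypothetical; none of these names exist in DPDK):

```c
#include <stdint.h>

/* Hypothetical per-vchannel entry applications would have to fill in
 * if vhost tracked nb_desc instead of max_desc. It only illustrates
 * the extra ABI the reply argues against introducing. */
struct vhost_async_dma_vchan_cfg {
	int16_t dev_id;   /* DMA device the vchannel belongs to */
	uint16_t vchan;   /* virtual channel index on that device */
	uint16_t nb_desc; /* ring size actually configured for it */
};
```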
>
> >
> > #endif /* _RTE_VHOST_ASYNC_H_ */
> > diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> > index a7ef7f1976..1202ba9c1a 100644
> > --- a/lib/vhost/version.map
> > +++ b/lib/vhost/version.map
> > @@ -84,6 +84,9 @@ EXPERIMENTAL {
> >
> > # added in 21.11
> > rte_vhost_get_monitor_addr;
> > +
> > + # added in 22.03
> > + rte_vhost_async_dma_configure;
> > };
> >
> > INTERNAL {
> > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> > index 13a9bb9dd1..32f37f4851 100644
> > --- a/lib/vhost/vhost.c
> > +++ b/lib/vhost/vhost.c
> > @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> > return;
> >
> > rte_free(vq->async->pkts_info);
> > + rte_free(vq->async->pkts_cmpl_flag);
> >
> > rte_free(vq->async->buffers_packed);
> > vq->async->buffers_packed = NULL;
> > @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> > }
> >
> > static __rte_always_inline int
> > -async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_channel_ops *ops)
> > +async_channel_register(int vid, uint16_t queue_id)
> > {
> > struct virtio_net *dev = get_device(vid);
> > struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> > @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t
> queue_id,
> > goto out_free_async;
> > }
> >
> > + async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size *
> sizeof(bool),
> > + RTE_CACHE_LINE_SIZE, node);
> > + if (!async->pkts_cmpl_flag) {
> > + VHOST_LOG_CONFIG(ERR, "failed to allocate async
> pkts_cmpl_flag
> > (vid %d, qid: %d)\n",
> > + vid, queue_id);
>
> qid: %u
>
> > + goto out_free_async;
> > + }
> > +
> > if (vq_is_packed(dev)) {
> > async->buffers_packed = rte_malloc_socket(NULL,
> > vq->size * sizeof(struct
> vring_used_elem_packed),
> > @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t
> queue_id,
> > }
> > }
> >
> > - async->ops.check_completed_copies = ops-
> >check_completed_copies;
> > - async->ops.transfer_data = ops->transfer_data;
> > -
> > vq->async = async;
> >
> > return 0;
> > @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t
> queue_id,
> > }
> >
> > int
> > -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops)
> > +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> > {
> > struct vhost_virtqueue *vq;
> > struct virtio_net *dev = get_device(vid);
> > int ret;
> >
> > - if (dev == NULL || ops == NULL)
> > + if (dev == NULL)
> > return -1;
> >
> > if (queue_id >= VHOST_MAX_VRING)
> > @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid,
> uint16_t
> > queue_id,
> > if (unlikely(vq == NULL || !dev->async_copy))
> > return -1;
> >
> > - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > - VHOST_LOG_CONFIG(ERR,
> > - "async copy is not supported on non-inorder mode "
> > - "(vid %d, qid: %d)\n", vid, queue_id);
> > - return -1;
> > - }
> > -
> > - if (unlikely(ops->check_completed_copies == NULL ||
> > - ops->transfer_data == NULL))
> > - return -1;
> > -
> > rte_spinlock_lock(&vq->access_lock);
> > - ret = async_channel_register(vid, queue_id, ops);
> > + ret = async_channel_register(vid, queue_id);
> > rte_spinlock_unlock(&vq->access_lock);
> >
> > return ret;
> > }
> >
> > int
> > -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops)
> > +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> queue_id)
> > {
> > struct vhost_virtqueue *vq;
> > struct virtio_net *dev = get_device(vid);
> >
> > - if (dev == NULL || ops == NULL)
> > + if (dev == NULL)
> > return -1;
> >
> > if (queue_id >= VHOST_MAX_VRING)
> > @@ -1747,18 +1737,7 @@
> rte_vhost_async_channel_register_thread_unsafe(int vid,
> > uint16_t queue_id,
> > if (unlikely(vq == NULL || !dev->async_copy))
> > return -1;
> >
> > - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > - VHOST_LOG_CONFIG(ERR,
> > - "async copy is not supported on non-inorder mode "
> > - "(vid %d, qid: %d)\n", vid, queue_id);
> > - return -1;
> > - }
> > -
> > - if (unlikely(ops->check_completed_copies == NULL ||
> > - ops->transfer_data == NULL))
> > - return -1;
> > -
> > - return async_channel_register(vid, queue_id, ops);
> > + return async_channel_register(vid, queue_id);
> > }
> >
> > int
> > @@ -1835,6 +1814,83 @@
> rte_vhost_async_channel_unregister_thread_unsafe(int
> > vid, uint16_t queue_id)
> > return 0;
> > }
> >
> > +static __rte_always_inline void
> > +vhost_free_async_dma_mem(void)
> > +{
> > + uint16_t i;
> > +
> > + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > + struct async_dma_info *dma = &dma_copy_track[i];
> > + int16_t j;
> > +
> > + if (dma->max_vchans == 0) {
> > + continue;
> > + }
> > +
> > + for (j = 0; j < dma->max_vchans; j++) {
> > + rte_free(dma->vchans[j].metadata);
> > + }
> > + rte_free(dma->vchans);
> > + dma->vchans = NULL;
> > + dma->max_vchans = 0;
> > + }
> > +}
> > +
> > +int
> > +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> uint16_t
> > count)
> > +{
> > + uint16_t i;
> > +
> > + if (!dmas) {
> > + VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration
> parameter.\n");
> > + return -1;
> > + }
> > +
> > + for (i = 0; i < count; i++) {
> > + struct async_dma_vchan_info *vchans;
> > + int16_t dev_id;
> > + uint16_t max_vchans;
> > + uint16_t max_desc;
> > + uint16_t j;
> > +
> > + dev_id = dmas[i].dev_id;
> > + max_vchans = dmas[i].max_vchans;
> > + max_desc = dmas[i].max_desc;
> > +
> > + if (!rte_is_power_of_2(max_desc)) {
> > + max_desc = rte_align32pow2(max_desc);
> > + }
>
> I think when aligning to power of 2, it should not exceed max_desc?
The aligned max_desc is used to allocate the context tracking array. We only need
to guarantee that the array size for every vchannel is >= max_desc, so it's
OK for the array to be larger than max_desc.
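To make the over-allocation argument concrete, here is a self-contained sketch of the round-up and ring-mask arithmetic (align32pow2 mirrors the contract of DPDK's rte_align32pow2 but is not the DPDK implementation):

```c
#include <stdint.h>

/* Round v up to the next power of 2; a power of 2 is returned
 * unchanged (same contract as rte_align32pow2 for v >= 1). */
static inline uint32_t
align32pow2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

/* With ring_size a power of 2, ring_mask = ring_size - 1 lets the
 * tracking array wrap with a single AND instead of a modulo. */
static inline uint16_t
ring_next(uint16_t idx, uint16_t ring_mask)
{
	return (idx + 1) & ring_mask;
}
```

A ring sized this way may exceed the configured max_desc, which is harmless: every in-flight descriptor still has a slot.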
> And based on above comment, if this max_desc is nb_desc configured for
> vchanl, you should just make sure the nb_desc be power-of-2.
>
> > +
> > + vchans = rte_zmalloc(NULL, sizeof(struct
> async_dma_vchan_info) *
> > max_vchans,
> > + RTE_CACHE_LINE_SIZE);
> > + if (vchans == NULL) {
> > + VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans
> for dma-
> > %d."
> > + " Cannot enable async data-path.\n",
> dev_id);
> > + vhost_free_async_dma_mem();
> > + return -1;
> > + }
> > +
> > + for (j = 0; j < max_vchans; j++) {
> > + vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *)
> *
> > max_desc,
> > + RTE_CACHE_LINE_SIZE);
> > + if (!vchans[j].metadata) {
> > + VHOST_LOG_CONFIG(ERR, "Failed to allocate
> metadata for
> > "
> > + "dma-%d vchan-%u\n",
> dev_id, j);
> > + vhost_free_async_dma_mem();
> > + return -1;
> > + }
> > +
> > + vchans[j].ring_size = max_desc;
> > + vchans[j].ring_mask = max_desc - 1;
> > + }
> > +
> > + dma_copy_track[dev_id].vchans = vchans;
> > + dma_copy_track[dev_id].max_vchans = max_vchans;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > int
> > rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> > {
> > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > index 7085e0885c..d9bda34e11 100644
> > --- a/lib/vhost/vhost.h
> > +++ b/lib/vhost/vhost.h
> > @@ -19,6 +19,7 @@
> > #include <rte_ether.h>
> > #include <rte_rwlock.h>
> > #include <rte_malloc.h>
> > +#include <rte_dmadev.h>
> >
> > #include "rte_vhost.h"
> > #include "rte_vdpa.h"
> > @@ -50,6 +51,7 @@
> >
> > #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> > #define VHOST_MAX_ASYNC_VEC 2048
> > +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
> >
> > #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
> > ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED |
> VRING_DESC_F_WRITE) : \
> > @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> > uint32_t count;
> > };
> >
> > +struct async_dma_vchan_info {
> > + /* circular array to track copy metadata */
> > + bool **metadata;
>
> If the metadata will only be flags, maybe just use some
> name called XXX_flag
Sure, I will rename it.
>
> > +
> > + /* max elements in 'metadata' */
> > + uint16_t ring_size;
> > + /* ring index mask for 'metadata' */
> > + uint16_t ring_mask;
> > +
> > + /* batching copies before a DMA doorbell */
> > + uint16_t nr_batching;
> > +
> > + /**
> > + * DMA virtual channel lock. Although it is able to bind DMA
> > + * virtual channels to data plane threads, vhost control plane
> > + * thread could call data plane functions too, thus causing
> > + * DMA device contention.
> > + *
> > + * For example, in VM exit case, vhost control plane thread needs
> > + * to clear in-flight packets before disable vring, but there could
> > + * be another data plane thread enqueuing packets to the same
> > + * vring with the same DMA virtual channel. But dmadev PMD
> functions
> > + * are lock-free, so the control plane and data plane threads
> > + * could operate the same DMA virtual channel at the same time.
> > + */
> > + rte_spinlock_t dma_lock;
> > +};
> > +
> > +struct async_dma_info {
> > + uint16_t max_vchans;
> > + struct async_dma_vchan_info *vchans;
> > +};
> > +
> > +extern struct async_dma_info
> dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > +
> > /**
> > * inflight async packet information
> > */
> > @@ -129,9 +166,6 @@ struct async_inflight_info {
> > };
> >
> > struct vhost_async {
> > - /* operation callbacks for DMA */
> > - struct rte_vhost_async_channel_ops ops;
> > -
> > struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> > struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> > uint16_t iter_idx;
> > @@ -139,6 +173,19 @@ struct vhost_async {
> >
> > /* data transfer status */
> > struct async_inflight_info *pkts_info;
> > + /**
> > + * packet reorder array. "true" indicates that DMA
> > + * device completes all copies for the packet.
> > + *
> > + * Note that this array could be written by multiple
> > + * threads at the same time. For example, two threads
> > + * enqueue packets to the same virtqueue with their
> > + * own DMA devices. However, since offloading is
> > + * per-packet basis, each packet flag will only be
> > + * written by one thread. And single byte write is
> > + * atomic, so no lock is needed.
> > + */
> > + bool *pkts_cmpl_flag;
> > uint16_t pkts_idx;
> > uint16_t pkts_inflight_n;
> > union {
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > index b3d954aab4..9f81fc9733 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -11,6 +11,7 @@
> > #include <rte_net.h>
> > #include <rte_ether.h>
> > #include <rte_ip.h>
> > +#include <rte_dmadev.h>
> > #include <rte_vhost.h>
> > #include <rte_tcp.h>
> > #include <rte_udp.h>
> > @@ -25,6 +26,9 @@
> >
> > #define MAX_BATCH_LEN 256
> >
> > +/* DMA device copy operation tracking array. */
> > +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > +
> > static __rte_always_inline bool
> > rxvq_is_mergeable(struct virtio_net *dev)
> > {
> > @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx,
> uint32_t
> > nr_vring)
> > return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> > }
> >
> > +static __rte_always_inline uint16_t
> > +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> > + uint16_t vchan, uint16_t head_idx,
> > + struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> > +{
> > + struct async_dma_vchan_info *dma_info =
> > &dma_copy_track[dma_id].vchans[vchan];
> > + uint16_t ring_mask = dma_info->ring_mask;
> > + uint16_t pkt_idx;
> > +
> > + rte_spinlock_lock(&dma_info->dma_lock);
> > +
> > + for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> > + struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> > + int copy_idx = 0;
> > + uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> > + uint16_t i;
> > +
> > + if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> > + goto out;
> > + }
> > +
> > + for (i = 0; i < nr_segs; i++) {
> > + /**
> > + * We have checked the available space before
> submit copies
> > to DMA
> > + * vChannel, so we don't handle error here.
> > + */
> > + copy_idx = rte_dma_copy(dma_id, vchan,
> > (rte_iova_t)iov[i].src_addr,
> > + (rte_iova_t)iov[i].dst_addr, iov[i].len,
> > + RTE_DMA_OP_FLAG_LLC);
>
> This assumes rte_dma_copy will always succeed if there's available space.
>
> But the API doxygen says:
>
> * @return
> * - 0..UINT16_MAX: index of enqueued job.
> * - -ENOSPC: if no space left to enqueue.
> * - other values < 0 on failure.
>
> So it should consider other vendor-specific errors.
Error handling is not free here. Specifically, SW fallback is a way to handle failed
copy operations, but it requires vhost to track the VA of every source and destination
buffer for every copy. The DMA library uses IOVA, so vhost only prepares IOVAs for the copies of
every packet in the async data-path. In the IOVA-as-PA case, the prepared IOVAs cannot
be used for SW fallback, which means vhost would need to store the VA for every copy of every
packet too, even if no error ever happens or IOVA is VA.
I think the only usable DMA engines in vhost today are CBDMA and DSA; is it worth
the cost for "future HW"? If other vendors' HW shows up in the future, is it OK to add the
support later? Or is there any way to get the VA back from an IOVA?
Thanks,
Jiayu
>
> Thanks,
> Chenbo
>
>
^ permalink raw reply [relevance 0%]
* [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
@ 2022-01-17 23:14 4% Michael Barker
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
0 siblings, 1 reply; 200+ results
From: Michael Barker @ 2022-01-17 23:14 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH v2] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-17 23:14 4% [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if Michael Barker
@ 2022-01-17 23:23 4% ` Michael Barker
2022-01-20 14:16 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Michael Barker @ 2022-01-17 23:23 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
When using clang with -Wall, the use of diagnose_if kicks up a warning,
requiring all dpdk includes to be wrapped with the pragma. This change
isolates the ignore to just the appropriate location and makes it easier
for users to apply -Wall,-Werror.
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 4%]
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
2022-01-17 5:39 0% ` Hu, Jiayu
@ 2022-01-19 2:18 0% ` Xia, Chenbo
0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2022-01-19 2:18 UTC (permalink / raw)
To: Hu, Jiayu, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
> -----Original Message-----
> From: Hu, Jiayu <jiayu.hu@intel.com>
> Sent: Monday, January 17, 2022 1:40 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; i.maximets@ovn.org; Richardson, Bruce
> <bruce.richardson@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> Pai G, Sunil <sunil.pai.g@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Ding, Xuan <xuan.ding@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> liangma@liangbit.com
> Subject: RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
>
> Hi Chenbo,
>
> Please see replies inline.
>
> Thanks,
> Jiayu
>
> > -----Original Message-----
> > From: Xia, Chenbo <chenbo.xia@intel.com>
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index 33d023aa39..44073499bc 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -24,8 +24,9 @@
> > > #include <rte_ip.h>
> > > #include <rte_tcp.h>
> > > #include <rte_pause.h>
> > > +#include <rte_dmadev.h>
> > > +#include <rte_vhost_async.h>
> > >
> > > -#include "ioat.h"
> > > #include "main.h"
> > >
> > > #ifndef MAX_QUEUES
> > > @@ -56,6 +57,14 @@
> > > #define RTE_TEST_TX_DESC_DEFAULT 512
> > >
> > > #define INVALID_PORT_ID 0xFF
> > > +#define INVALID_DMA_ID -1
> > > +
> > > +#define MAX_VHOST_DEVICE 1024
> > > +#define DMA_RING_SIZE 4096
> > > +
> > > +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> > > +struct rte_vhost_async_dma_info
> > dma_config[RTE_DMADEV_DEFAULT_MAX];
> > > +static int dma_count;
> > >
> > > /* mask of enabled ports */
> > > static uint32_t enabled_port_mask = 0;
> > > @@ -96,8 +105,6 @@ static int builtin_net_driver;
> > >
> > > static int async_vhost_driver;
> > >
> > > -static char *dma_type;
> > > -
> > > /* Specify timeout (in useconds) between retries on RX. */
> > > static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> > > /* Specify the number of retries on RX. */
> > > @@ -196,13 +203,134 @@ struct vhost_bufftable
> > *vhost_txbuff[RTE_MAX_LCORE *
> > > MAX_VHOST_DEVICE];
> > > #define MBUF_TABLE_DRAIN_TSC((rte_get_tsc_hz() + US_PER_S - 1) \
> > > / US_PER_S * BURST_TX_DRAIN_US)
> > >
> > > +static inline bool
> > > +is_dma_configured(int16_t dev_id)
> > > +{
> > > +int i;
> > > +
> > > +for (i = 0; i < dma_count; i++) {
> > > +if (dma_config[i].dev_id == dev_id) {
> > > +return true;
> > > +}
> > > +}
> > > +return false;
> > > +}
> > > +
> > > static inline int
> > > open_dma(const char *value)
> > > {
> > > -if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> > > -return open_ioat(value);
> > > +struct dma_for_vhost *dma_info = dma_bind;
> > > +char *input = strndup(value, strlen(value) + 1);
> > > +char *addrs = input;
> > > +char *ptrs[2];
> > > +char *start, *end, *substr;
> > > +int64_t vid, vring_id;
> > > +
> > > +struct rte_dma_info info;
> > > +struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> > > +struct rte_dma_vchan_conf qconf = {
> > > +.direction = RTE_DMA_DIR_MEM_TO_MEM,
> > > +.nb_desc = DMA_RING_SIZE
> > > +};
> > > +
> > > +int dev_id;
> > > +int ret = 0;
> > > +uint16_t i = 0;
> > > +char *dma_arg[MAX_VHOST_DEVICE];
> > > +int args_nr;
> > > +
> > > +while (isblank(*addrs))
> > > +addrs++;
> > > +if (*addrs == '\0') {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +/* process DMA devices within bracket. */
> > > +addrs++;
> > > +substr = strtok(addrs, ";]");
> > > +if (!substr) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +args_nr = rte_strsplit(substr, strlen(substr),
> > > +dma_arg, MAX_VHOST_DEVICE, ',');
> > > +if (args_nr <= 0) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +while (i < args_nr) {
> > > +char *arg_temp = dma_arg[i];
> > > +uint8_t sub_nr;
> > > +
> > > +sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> > > +if (sub_nr != 2) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +start = strstr(ptrs[0], "txd");
> > > +if (start == NULL) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +start += 3;
> > > +vid = strtol(start, &end, 0);
> > > +if (end == start) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +vring_id = 0 + VIRTIO_RXQ;
> >
> > No need to introduce vring_id, it's always VIRTIO_RXQ
>
> I will remove it later.
>
> >
> > > +
> > > +dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> > > +if (dev_id < 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to find
> > DMA %s.\n",
> > > ptrs[1]);
> > > +ret = -1;
> > > +goto out;
> > > +} else if (is_dma_configured(dev_id)) {
> > > +goto done;
> > > +}
> > > +
> >
> > Please call rte_dma_info_get before configure to make sure
> > info.max_vchans >=1
>
> Do you suggest using "rte_dma_info_get() and info.max_vchans == 0" to indicate
> the device is not configured, rather than using is_dma_configured()?
No, I mean when you configure the dmadev with one vchan, make sure it does have
at least one vchan, even though the 'vchan == 0' case can hardly happen.
Just like the function call sequence in test_dmadev_instance() in test_dmadev.c.
>
> >
> > > +if (rte_dma_configure(dev_id, &dev_config) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure
> > DMA %d.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up
> > DMA %d.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > >
> > > -return -1;
> > > +rte_dma_info_get(dev_id, &info);
> > > +if (info.nb_vchans != 1) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no
> > queues.\n",
> > > dev_id);
> >
> > Then the above means the number of vchan is not configured.
> >
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +if (rte_dma_start(dev_id) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to start
> > DMA %u.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +dma_config[dma_count].dev_id = dev_id;
> > > +dma_config[dma_count].max_vchans = 1;
> > > +dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> > > +
> > > +done:
> > > +(dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> > > +i++;
> > > +}
> > > +out:
> > > +free(input);
> > > +return ret;
> > > }
> > >
> > > /*
> > > @@ -500,8 +628,6 @@ enum {
> > > OPT_CLIENT_NUM,
> > > #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> > > OPT_BUILTIN_NET_DRIVER_NUM,
> > > -#define OPT_DMA_TYPE "dma-type"
> > > -OPT_DMA_TYPE_NUM,
> > > #define OPT_DMAS "dmas"
> > > OPT_DMAS_NUM,
> > > };
> > > @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> > > NULL, OPT_CLIENT_NUM},
> > > {OPT_BUILTIN_NET_DRIVER, no_argument,
> > > NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> > > -{OPT_DMA_TYPE, required_argument,
> > > -NULL, OPT_DMA_TYPE_NUM},
> > > {OPT_DMAS, required_argument,
> > > NULL, OPT_DMAS_NUM},
> > > {NULL, 0, 0, 0},
> > > @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> > > }
> > > break;
> > >
> > > -case OPT_DMA_TYPE_NUM:
> > > -dma_type = optarg;
> > > -break;
> > > -
> > > case OPT_DMAS_NUM:
> > > if (open_dma(optarg) == -1) {
> > > RTE_LOG(INFO, VHOST_CONFIG,
> > > @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> > > {
> > > struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> > > uint16_t complete_count;
> > > +int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> > > -VIRTIO_RXQ, p_cpl,
> > MAX_PKT_BURST);
> > > +VIRTIO_RXQ, p_cpl, MAX_PKT_BURST,
> > dma_id, 0);
> > > if (complete_count) {
> > > free_pkts(p_cpl, complete_count);
> > > __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> > > __ATOMIC_SEQ_CST);
> > > @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
> > >
> > > if (builtin_net_driver) {
> > > ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> > > -} else if (async_vhost_driver) {
> > > +} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t enqueue_fail = 0;
> > > +int16_t dma_id = dma_bind[vdev-
> > >vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_async_pkts(vdev);
> > > -ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> > VIRTIO_RXQ, m,
> > > nr_xmit);
> > > +ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> > VIRTIO_RXQ, m,
> > > nr_xmit, dma_id, 0);
> > > __atomic_add_fetch(&vdev->pkts_inflight, ret,
> > __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = nr_xmit - ret;
> > > @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > -if (!async_vhost_driver)
> > > +if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > > free_pkts(m, nr_xmit);
> > > }
> > >
> > > @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> > > if (builtin_net_driver) {
> > > enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> > > pkts, rx_count);
> > > -} else if (async_vhost_driver) {
> > > +} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t enqueue_fail = 0;
> > > +int16_t dma_id = dma_bind[vdev-
> > >vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_async_pkts(vdev);
> > > enqueue_count = rte_vhost_submit_enqueue_burst(vdev-
> > >vid,
> > > -VIRTIO_RXQ, pkts, rx_count);
> > > +VIRTIO_RXQ, pkts, rx_count, dma_id,
> > 0);
> > > __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> > > __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = rx_count - enqueue_count;
> > > @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > -if (!async_vhost_driver)
> > > +if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > > free_pkts(pkts, rx_count);
> > > }
> > >
> > > @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> > > "(%d) device has been removed from data core\n",
> > > vdev->vid);
> > >
> > > -if (async_vhost_driver) {
> > > +if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t n_pkt = 0;
> > > +int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > >
> > > while (vdev->pkts_inflight) {
> > > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid,
> > VIRTIO_RXQ,
> > > -m_cpl, vdev->pkts_inflight);
> > > +m_cpl, vdev->pkts_inflight,
> > dma_id, 0);
> > > free_pkts(m_cpl, n_pkt);
> > > __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > > +dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > > }
> > >
> > > rte_free(vdev);
> > > @@ -1468,20 +1593,14 @@ new_device(int vid)
> > > "(%d) device has been added to data core %d\n",
> > > vid, vdev->coreid);
> > >
> > > -if (async_vhost_driver) {
> > > -struct rte_vhost_async_config config = {0};
> > > -struct rte_vhost_async_channel_ops channel_ops;
> > > -
> > > -if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> > > -channel_ops.transfer_data = ioat_transfer_data_cb;
> > > -channel_ops.check_completed_copies =
> > > -ioat_check_completed_copies_cb;
> > > -
> > > -config.features = RTE_VHOST_ASYNC_INORDER;
> > > +if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > > +int ret;
> > >
> > > -return rte_vhost_async_channel_register(vid,
> > VIRTIO_RXQ,
> > > -config, &channel_ops);
> > > +ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > > +if (ret == 0) {
> > > +dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled =
> > true;
> > > }
> > > +return ret;
> > > }
> > >
> > > return 0;
> > > @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t
> > queue_id, int
> > > enable)
> > > if (queue_id != VIRTIO_RXQ)
> > > return 0;
> > >
> > > -if (async_vhost_driver) {
> > > +if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > > if (!enable) {
> > > uint16_t n_pkt = 0;
> > > +int16_t dma_id =
> > dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > >
> > > while (vdev->pkts_inflight) {
> > > n_pkt =
> > rte_vhost_clear_queue_thread_unsafe(vid,
> > > queue_id,
> > > -m_cpl, vdev-
> > >pkts_inflight);
> > > +m_cpl, vdev-
> > >pkts_inflight, dma_id,
> > > 0);
> > > free_pkts(m_cpl, n_pkt);
> > > __atomic_sub_fetch(&vdev->pkts_inflight,
> > n_pkt,
> > > __ATOMIC_SEQ_CST);
> > > }
> > > @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t
> > > nr_switch_core, uint32_t mbuf_size,
> > > rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> > > }
> > >
> > > +static void
> > > +init_dma(void)
> > > +{
> > > +int i;
> > > +
> > > +for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> > > +int j;
> > > +
> > > +for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> > > +dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> > > +dma_bind[i].dmas[j].async_enabled = false;
> > > +}
> > > +}
> > > +
> > > +for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > > +dma_config[i].dev_id = INVALID_DMA_ID;
> > > +}
> > > +}
> > > +
> > > /*
> > > * Main function, does initialisation and calls the per-lcore functions.
> > > */
> > > @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> > > argc -= ret;
> > > argv += ret;
> > >
> > > +/* initialize dma structures */
> > > +init_dma();
> > > +
> > > /* parse app arguments */
> > > ret = us_vhost_parse_args(argc, argv);
> > > if (ret < 0)
> > > @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> > > if (client_mode)
> > > flags |= RTE_VHOST_USER_CLIENT;
> > >
> > > +if (async_vhost_driver) {
> > > +if (rte_vhost_async_dma_configure(dma_config, dma_count)
> > < 0) {
> > > +RTE_LOG(ERR, VHOST_PORT, "Failed to configure
> > DMA in
> > > vhost.\n");
> > > +for (i = 0; i < dma_count; i++) {
> > > +if (dma_config[i].dev_id != INVALID_DMA_ID)
> > {
> > > +rte_dma_stop(dma_config[i].dev_id);
> > > +dma_config[i].dev_id =
> > INVALID_DMA_ID;
> > > +}
> > > +}
> > > +dma_count = 0;
> > > +async_vhost_driver = false;
> > > +}
> > > +}
> > > +
> > > /* Register vhost user driver to handle vhost messages. */
> > > for (i = 0; i < nb_sockets; i++) {
> > > char *file = socket_files + i * PATH_MAX;
> > > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > > index e7b1ac60a6..b4a453e77e 100644
> > > --- a/examples/vhost/main.h
> > > +++ b/examples/vhost/main.h
> > > @@ -8,6 +8,7 @@
> > > #include <sys/queue.h>
> > >
> > > #include <rte_ether.h>
> > > +#include <rte_pci.h>
> > >
> > > /* Macros for printing using RTE_LOG */
> > > #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> > > @@ -79,6 +80,16 @@ struct lcore_info {
> > > struct vhost_dev_tailq_list vdev_list;
> > > };
> > >
> > > +struct dma_info {
> > > +struct rte_pci_addr addr;
> > > +int16_t dev_id;
> > > +bool async_enabled;
> > > +};
> > > +
> > > +struct dma_for_vhost {
> > > +struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> > > +};
> > > +
> > > /* we implement non-extra virtio net features */
> > > #define VIRTIO_NET_FEATURES0
> > >
> > > diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> > > index 3efd5e6540..87a637f83f 100644
> > > --- a/examples/vhost/meson.build
> > > +++ b/examples/vhost/meson.build
> > > @@ -12,13 +12,9 @@ if not is_linux
> > > endif
> > >
> > > deps += 'vhost'
> > > +deps += 'dmadev'
> > > allow_experimental_apis = true
> > > sources = files(
> > > 'main.c',
> > > 'virtio_net.c',
> > > )
> > > -
> > > -if dpdk_conf.has('RTE_RAW_IOAT')
> > > - deps += 'raw_ioat'
> > > - sources += files('ioat.c')
> > > -endif
> > > diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> > > index cdb37a4814..8107329400 100644
> > > --- a/lib/vhost/meson.build
> > > +++ b/lib/vhost/meson.build
> > > @@ -33,7 +33,8 @@ headers = files(
> > > 'rte_vhost_async.h',
> > > 'rte_vhost_crypto.h',
> > > )
> > > +
> > > driver_sdk_headers = files(
> > > 'vdpa_driver.h',
> > > )
> > > -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> > > +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> > > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > > index a87ea6ba37..23a7a2d8b3 100644
> > > --- a/lib/vhost/rte_vhost_async.h
> > > +++ b/lib/vhost/rte_vhost_async.h
> > > @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> > > };
> > >
> > > /**
> > > - * dma transfer status
> > > + * DMA device information
> > > */
> > > -struct rte_vhost_async_status {
> > > -/** An array of application specific data for source memory */
> > > -uintptr_t *src_opaque_data;
> > > -/** An array of application specific data for destination memory */
> > > -uintptr_t *dst_opaque_data;
> > > -};
> > > -
> > > -/**
> > > - * dma operation callbacks to be implemented by applications
> > > - */
> > > -struct rte_vhost_async_channel_ops {
> > > -/**
> > > - * instruct async engines to perform copies for a batch of packets
> > > - *
> > > - * @param vid
> > > - * id of vhost device to perform data copies
> > > - * @param queue_id
> > > - * queue id to perform data copies
> > > - * @param iov_iter
> > > - * an array of IOV iterators
> > > - * @param opaque_data
> > > - * opaque data pair sending to DMA engine
> > > - * @param count
> > > - * number of elements in the "descs" array
> > > - * @return
> > > - * number of IOV iterators processed, negative value means error
> > > - */
> > > -int32_t (*transfer_data)(int vid, uint16_t queue_id,
> > > -struct rte_vhost_iov_iter *iov_iter,
> > > -struct rte_vhost_async_status *opaque_data,
> > > -uint16_t count);
> > > -/**
> > > - * check copy-completed packets from the async engine
> > > - * @param vid
> > > - * id of vhost device to check copy completion
> > > - * @param queue_id
> > > - * queue id to check copy completion
> > > - * @param opaque_data
> > > - * buffer to receive the opaque data pair from DMA engine
> > > - * @param max_packets
> > > - * max number of packets could be completed
> > > - * @return
> > > - * number of async descs completed, negative value means error
> > > - */
> > > -int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_status *opaque_data,
> > > -uint16_t max_packets);
> > > -};
> > > -
> > > -/**
> > > - * async channel features
> > > - */
> > > -enum {
> > > -RTE_VHOST_ASYNC_INORDER = 1U << 0,
> > > -};
> > > -
> > > -/**
> > > - * async channel configuration
> > > - */
> > > -struct rte_vhost_async_config {
> > > -uint32_t features;
> > > -uint32_t rsvd[2];
> > > +struct rte_vhost_async_dma_info {
> > > +int16_t dev_id;/* DMA device ID */
> > > +uint16_t max_vchans;/* max number of vchan */
> > > +uint16_t max_desc;/* max desc number of vchan */
> > > };
> > >
> > > /**
> > > @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> > > * vhost device id async channel to be attached to
> > > * @param queue_id
> > > * vhost queue id async channel to be attached to
> > > - * @param config
> > > - * Async channel configuration structure
> > > - * @param ops
> > > - * Async channel operation callbacks
> > > * @return
> > > * 0 on success, -1 on failures
> > > */
> > > __rte_experimental
> > > -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops);
> > > +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
> > >
> > > /**
> > > * Unregister an async channel for a vhost queue
> > > @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid,
> > uint16_t
> > > queue_id);
> > > * vhost device id async channel to be attached to
> > > * @param queue_id
> > > * vhost queue id async channel to be attached to
> > > - * @param config
> > > - * Async channel configuration
> > > - * @param ops
> > > - * Async channel operation callbacks
> > > * @return
> > > * 0 on success, -1 on failures
> > > */
> > > __rte_experimental
> > > -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops);
> > > +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > > queue_id);
> > >
> > > /**
> > > * Unregister an async channel for a vhost queue without performing any
> > > @@ -179,12 +109,17 @@ int
> > rte_vhost_async_channel_unregister_thread_unsafe(int
> > > vid,
> > > * array of packets to be enqueued
> > > * @param count
> > > * packets num to be enqueued
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * num of packets enqueued
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> >
> > All dma_id in the API should be uint16_t. Otherwise you need to check if
> valid.
>
> Yes, you are right. Although dma_id is defined as int16_t and DMA library
> checks
> if it is valid, vhost doesn't handle DMA failure and we need to make sure
> dma_id
> is valid before using it. And even if vhost handles DMA error, a better place
> to check
> invalid dma_id is before passing it to DMA library too. I will add the check
> later.
>
> >
> > >
> > > /**
> > > * This function checks async completion status for a specific vhost
> > > @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int
> > vid,
> > > uint16_t queue_id,
> > > * blank array to get return packet pointer
> > > * @param count
> > > * size of the packet array
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * num of packets returned
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> > >
> > > /**
> > > * This function returns the amount of in-flight packets for the vhost
> > > @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t
> > > queue_id);
> > > * Blank array to get return packet pointer
> > > * @param count
> > > * Size of the packet array
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * Number of packets returned
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> > > +/**
> > > + * The DMA vChannels used in asynchronous data path must be
> > configured
> > > + * first. So this function needs to be called before enabling DMA
> > > + * acceleration for vring. If this function fails, asynchronous data path
> > > + * cannot be enabled for any vring further.
> > > + *
> > > + * @param dmas
> > > + * DMA information
> > > + * @param count
> > > + * Element number of 'dmas'
> > > + * @return
> > > + * 0 on success, and -1 on failure
> > > + */
> > > +__rte_experimental
> > > +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info
> > *dmas,
> > > +uint16_t count);
> >
> > I think based on current design, vhost can use every vchan if user app let
> it.
> > So the max_desc and max_vchans can just be got from dmadev APIs? Then
> > there's
> > no need to introduce the new ABI struct rte_vhost_async_dma_info.
>
> Yes, no need to introduce struct rte_vhost_async_dma_info. We can either use
> struct rte_dma_info which is suggested by Maxime, or query from dma library
> via device id. Since dma device configuration is left to applications, I
> prefer to
> use rte_dma_info directly. What do you think?
If you only use rte_dma_info as the input param, you will also need to call a dmadev
API to get the dmadev ID in rte_vhost_async_dma_configure (or add both rte_dma_info
and the dmadev ID). So I suggest using only the dmadev ID as input.
>
> >
> > And about max_desc, I see the dmadev lib, you can get vchan's max_desc
> > but you
> > may use a nb_desc (<= max_desc) to configure vchanl. And IIUC, vhost wants
> > to
> > know the nb_desc instead of max_desc?
>
> True, nb_desc is better than max_desc. But dma library doesn’t provide
> function
> to query nb_desc for every vchannel. And rte_dma_info cannot be used in
> rte_vhost_async_dma_configure(), if vhost uses nb_desc. So the only way is
> to require users to provide nb_desc for every vchannel, and it will introduce
> a new struct. Is it really needed?
>
Since the dmadev lib does not currently provide a way to query the real nb_desc of a
vchan, I think we can just use max_desc.
But ideally, if the dmadev lib provided such a way, the configured nb_desc and nb_vchans
should be used to configure the vhost lib.
@Bruce, could you add such a way to the dmadev lib? Right now users have no way to know
the real configured nb_desc of a vchan.
> >
> > >
> > > #endif /* _RTE_VHOST_ASYNC_H_ */
> > > diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> > > index a7ef7f1976..1202ba9c1a 100644
> > > --- a/lib/vhost/version.map
> > > +++ b/lib/vhost/version.map
> > > @@ -84,6 +84,9 @@ EXPERIMENTAL {
> > >
> > > # added in 21.11
> > > rte_vhost_get_monitor_addr;
> > > +
> > > +# added in 22.03
> > > +rte_vhost_async_dma_configure;
> > > };
> > >
> > > INTERNAL {
> > > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> > > index 13a9bb9dd1..32f37f4851 100644
> > > --- a/lib/vhost/vhost.c
> > > +++ b/lib/vhost/vhost.c
> > > @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> > > return;
> > >
> > > rte_free(vq->async->pkts_info);
> > > +rte_free(vq->async->pkts_cmpl_flag);
> > >
> > > rte_free(vq->async->buffers_packed);
> > > vq->async->buffers_packed = NULL;
> > > @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> > > }
> > >
> > > static __rte_always_inline int
> > > -async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +async_channel_register(int vid, uint16_t queue_id)
> > > {
> > > struct virtio_net *dev = get_device(vid);
> > > struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> > > @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > goto out_free_async;
> > > }
> > >
> > > +async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size *
> > sizeof(bool),
> > > +RTE_CACHE_LINE_SIZE, node);
> > > +if (!async->pkts_cmpl_flag) {
> > > +VHOST_LOG_CONFIG(ERR, "failed to allocate async
> > pkts_cmpl_flag
> > > (vid %d, qid: %d)\n",
> > > +vid, queue_id);
> >
> > qid: %u
> >
> > > +goto out_free_async;
> > > +}
> > > +
> > > if (vq_is_packed(dev)) {
> > > async->buffers_packed = rte_malloc_socket(NULL,
> > > vq->size * sizeof(struct
> > vring_used_elem_packed),
> > > @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > }
> > > }
> > >
> > > -async->ops.check_completed_copies = ops-
> > >check_completed_copies;
> > > -async->ops.transfer_data = ops->transfer_data;
> > > -
> > > vq->async = async;
> > >
> > > return 0;
> > > @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > }
> > >
> > > int
> > > -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> > > {
> > > struct vhost_virtqueue *vq;
> > > struct virtio_net *dev = get_device(vid);
> > > int ret;
> > >
> > > -if (dev == NULL || ops == NULL)
> > > +if (dev == NULL)
> > > return -1;
> > >
> > > if (queue_id >= VHOST_MAX_VRING)
> > > @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid,
> > uint16_t
> > > queue_id,
> > > if (unlikely(vq == NULL || !dev->async_copy))
> > > return -1;
> > >
> > > -if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > > -VHOST_LOG_CONFIG(ERR,
> > > -"async copy is not supported on non-inorder mode "
> > > -"(vid %d, qid: %d)\n", vid, queue_id);
> > > -return -1;
> > > -}
> > > -
> > > -if (unlikely(ops->check_completed_copies == NULL ||
> > > -ops->transfer_data == NULL))
> > > -return -1;
> > > -
> > > rte_spinlock_lock(&vq->access_lock);
> > > -ret = async_channel_register(vid, queue_id, ops);
> > > +ret = async_channel_register(vid, queue_id);
> > > rte_spinlock_unlock(&vq->access_lock);
> > >
> > > return ret;
> > > }
> > >
> > > int
> > > -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id)
> > > {
> > > struct vhost_virtqueue *vq;
> > > struct virtio_net *dev = get_device(vid);
> > >
> > > -if (dev == NULL || ops == NULL)
> > > +if (dev == NULL)
> > > return -1;
> > >
> > > if (queue_id >= VHOST_MAX_VRING)
> > > @@ -1747,18 +1737,7 @@
> > rte_vhost_async_channel_register_thread_unsafe(int vid,
> > > uint16_t queue_id,
> > > if (unlikely(vq == NULL || !dev->async_copy))
> > > return -1;
> > >
> > > -if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > > -VHOST_LOG_CONFIG(ERR,
> > > -"async copy is not supported on non-inorder mode "
> > > -"(vid %d, qid: %d)\n", vid, queue_id);
> > > -return -1;
> > > -}
> > > -
> > > -if (unlikely(ops->check_completed_copies == NULL ||
> > > -ops->transfer_data == NULL))
> > > -return -1;
> > > -
> > > -return async_channel_register(vid, queue_id, ops);
> > > +return async_channel_register(vid, queue_id);
> > > }
> > >
> > > int
> > > @@ -1835,6 +1814,83 @@
> > rte_vhost_async_channel_unregister_thread_unsafe(int
> > > vid, uint16_t queue_id)
> > > return 0;
> > > }
> > >
> > > +static __rte_always_inline void
> > > +vhost_free_async_dma_mem(void)
> > > +{
> > > +uint16_t i;
> > > +
> > > +for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > > +struct async_dma_info *dma = &dma_copy_track[i];
> > > +int16_t j;
> > > +
> > > +if (dma->max_vchans == 0) {
> > > +continue;
> > > +}
> > > +
> > > +for (j = 0; j < dma->max_vchans; j++) {
> > > +rte_free(dma->vchans[j].metadata);
> > > +}
> > > +rte_free(dma->vchans);
> > > +dma->vchans = NULL;
> > > +dma->max_vchans = 0;
> > > +}
> > > +}
> > > +
> > > +int
> > > +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> > uint16_t
> > > count)
> > > +{
> > > +uint16_t i;
> > > +
> > > +if (!dmas) {
> > > +VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration
> > parameter.\n");
> > > +return -1;
> > > +}
> > > +
> > > +for (i = 0; i < count; i++) {
> > > +struct async_dma_vchan_info *vchans;
> > > +int16_t dev_id;
> > > +uint16_t max_vchans;
> > > +uint16_t max_desc;
> > > +uint16_t j;
> > > +
> > > +dev_id = dmas[i].dev_id;
> > > +max_vchans = dmas[i].max_vchans;
> > > +max_desc = dmas[i].max_desc;
> > > +
> > > +if (!rte_is_power_of_2(max_desc)) {
> > > +max_desc = rte_align32pow2(max_desc);
> > > +}
> >
> > I think when aligning to power of 2, it should exceed not max_desc?
>
> Aligned max_desc is used to allocate context tracking array. We only need
> to guarantee the size of the array for every vchannel is >= max_desc. So it's
> OK to have greater array size than max_desc.
>
> > And based on above comment, if this max_desc is nb_desc configured for
> > vchanl, you should just make sure the nb_desc be power-of-2.
> >
> > > +
> > > +vchans = rte_zmalloc(NULL, sizeof(struct
> > async_dma_vchan_info) *
> > > max_vchans,
> > > +RTE_CACHE_LINE_SIZE);
> > > +if (vchans == NULL) {
> > > +VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans
> > for dma-
> > > %d."
> > > +" Cannot enable async data-path.\n",
> > dev_id);
> > > +vhost_free_async_dma_mem();
> > > +return -1;
> > > +}
> > > +
> > > +for (j = 0; j < max_vchans; j++) {
> > > +vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *)
> > *
> > > max_desc,
> > > +RTE_CACHE_LINE_SIZE);
> > > +if (!vchans[j].metadata) {
> > > +VHOST_LOG_CONFIG(ERR, "Failed to allocate
> > metadata for
> > > "
> > > +"dma-%d vchan-%u\n",
> > dev_id, j);
> > > +vhost_free_async_dma_mem();
> > > +return -1;
> > > +}
> > > +
> > > +vchans[j].ring_size = max_desc;
> > > +vchans[j].ring_mask = max_desc - 1;
> > > +}
> > > +
> > > +dma_copy_track[dev_id].vchans = vchans;
> > > +dma_copy_track[dev_id].max_vchans = max_vchans;
> > > +}
> > > +
> > > +return 0;
> > > +}
> > > +
> > > int
> > > rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> > > {
> > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > > index 7085e0885c..d9bda34e11 100644
> > > --- a/lib/vhost/vhost.h
> > > +++ b/lib/vhost/vhost.h
> > > @@ -19,6 +19,7 @@
> > > #include <rte_ether.h>
> > > #include <rte_rwlock.h>
> > > #include <rte_malloc.h>
> > > +#include <rte_dmadev.h>
> > >
> > > #include "rte_vhost.h"
> > > #include "rte_vdpa.h"
> > > @@ -50,6 +51,7 @@
> > >
> > > #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> > > #define VHOST_MAX_ASYNC_VEC 2048
> > > +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
> > >
> > > #define PACKED_DESC_ENQUEUE_USED_FLAG(w)\
> > > ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED |
> > VRING_DESC_F_WRITE) : \
> > > @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> > > uint32_t count;
> > > };
> > >
> > > +struct async_dma_vchan_info {
> > > +/* circular array to track copy metadata */
> > > +bool **metadata;
> >
> > If the metadata will only be flags, maybe just use some
> > name called XXX_flag
>
> Sure, I will rename it.
>
> >
> > > +
> > > +/* max elements in 'metadata' */
> > > +uint16_t ring_size;
> > > +/* ring index mask for 'metadata' */
> > > +uint16_t ring_mask;
> > > +
> > > +/* batching copies before a DMA doorbell */
> > > +uint16_t nr_batching;
> > > +
> > > +/**
> > > + * DMA virtual channel lock. Although it is able to bind DMA
> > > + * virtual channels to data plane threads, vhost control plane
> > > + * thread could call data plane functions too, thus causing
> > > + * DMA device contention.
> > > + *
> > > + * For example, in VM exit case, vhost control plane thread needs
> > > + * to clear in-flight packets before disable vring, but there could
> > > + * be anotther data plane thread is enqueuing packets to the same
> > > + * vring with the same DMA virtual channel. But dmadev PMD
> > functions
> > > + * are lock-free, so the control plane and data plane threads
> > > + * could operate the same DMA virtual channel at the same time.
> > > + */
> > > +rte_spinlock_t dma_lock;
> > > +};
> > > +
> > > +struct async_dma_info {
> > > +uint16_t max_vchans;
> > > +struct async_dma_vchan_info *vchans;
> > > +};
> > > +
> > > +extern struct async_dma_info
> > dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > > +
> > > /**
> > > * inflight async packet information
> > > */
> > > @@ -129,9 +166,6 @@ struct async_inflight_info {
> > > };
> > >
> > > struct vhost_async {
> > > -/* operation callbacks for DMA */
> > > -struct rte_vhost_async_channel_ops ops;
> > > -
> > > struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> > > struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> > > uint16_t iter_idx;
> > > @@ -139,6 +173,19 @@ struct vhost_async {
> > >
> > > /* data transfer status */
> > > struct async_inflight_info *pkts_info;
> > > +/**
> > > + * packet reorder array. "true" indicates that DMA
> > > + * device completes all copies for the packet.
> > > + *
> > > + * Note that this array could be written by multiple
> > > + * threads at the same time. For example, two threads
> > > + * enqueue packets to the same virtqueue with their
> > > + * own DMA devices. However, since offloading is
> > > + * per-packet basis, each packet flag will only be
> > > + * written by one thread. And single byte write is
> > > + * atomic, so no lock is needed.
> > > + */
> > > +bool *pkts_cmpl_flag;
> > > uint16_t pkts_idx;
> > > uint16_t pkts_inflight_n;
> > > union {
> > > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > > index b3d954aab4..9f81fc9733 100644
> > > --- a/lib/vhost/virtio_net.c
> > > +++ b/lib/vhost/virtio_net.c
> > > @@ -11,6 +11,7 @@
> > > #include <rte_net.h>
> > > #include <rte_ether.h>
> > > #include <rte_ip.h>
> > > +#include <rte_dmadev.h>
> > > #include <rte_vhost.h>
> > > #include <rte_tcp.h>
> > > #include <rte_udp.h>
> > > @@ -25,6 +26,9 @@
> > >
> > > #define MAX_BATCH_LEN 256
> > >
> > > +/* DMA device copy operation tracking array. */
> > > +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > > +
> > > static __rte_always_inline bool
> > > rxvq_is_mergeable(struct virtio_net *dev)
> > > {
> > > @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx,
> > uint32_t
> > > nr_vring)
> > > return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> > > }
> > >
> > > +static __rte_always_inline uint16_t
> > > +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> > > +uint16_t vchan, uint16_t head_idx,
> > > +struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> > > +{
> > > +struct async_dma_vchan_info *dma_info =
> > > &dma_copy_track[dma_id].vchans[vchan];
> > > +uint16_t ring_mask = dma_info->ring_mask;
> > > +uint16_t pkt_idx;
> > > +
> > > +rte_spinlock_lock(&dma_info->dma_lock);
> > > +
> > > +for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> > > +struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> > > +int copy_idx = 0;
> > > +uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> > > +uint16_t i;
> > > +
> > > +if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> > > +goto out;
> > > +}
> > > +
> > > +for (i = 0; i < nr_segs; i++) {
> > > +/**
> > > + * We have checked the available space before
> > submit copies
> > > to DMA
> > > + * vChannel, so we don't handle error here.
> > > + */
> > > +copy_idx = rte_dma_copy(dma_id, vchan,
> > > (rte_iova_t)iov[i].src_addr,
> > > +(rte_iova_t)iov[i].dst_addr, iov[i].len,
> > > +RTE_DMA_OP_FLAG_LLC);
> >
> > This assumes rte_dma_copy will always succeed if there's available space.
> >
> > But the API doxygen says:
> >
> > * @return
> > * - 0..UINT16_MAX: index of enqueued job.
> > * - -ENOSPC: if no space left to enqueue.
> > * - other values < 0 on failure.
> >
> > So it should consider other vendor-specific errors.
>
> Error handling is not free here. Specifically, SW fallback is a way to handle
> failed
> copy operations. But it requires vhost to track VA for every source and
> destination
> buffer for every copy. DMA library uses IOVA, so vhost only prepares IOVA for
> copies of
> every packet in async data-path. In the case of IOVA as PA, the prepared IOVAs
> cannot
> be used as SW fallback, which means vhost needs to store VA for every copy of
> every
> packet too, even if there no errors will happen or IOVA is VA.
>
> I am thinking that the only usable DMA engines in vhost are CBDMA and DSA, is
> it worth
> the cost for "future HW"? If there will be other vendor's HW in future, is it
> OK to add the
> support later? Or is there any way to get VA from IOVA?
Let's investigate how much of a performance drop the error handling will bring and see...
Thanks,
Chenbo
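For reference, a minimal sketch of the return-value classification under discussion, assuming only the dmadev doxygen convention quoted above (a non-negative job index on success, -ENOSPC when no space is left, other negative values for vendor-specific failures). The function name is illustrative, not part of any API:

```c
#include <errno.h>

/* Classify an rte_dma_copy()-style return value. Illustrative only:
 * -ENOSPC means the ring is full and the caller may simply retry later,
 * while any other negative value is a vendor-specific failure that
 * would need a SW-fallback copy path. */
static inline int
classify_dma_ret(int ret)
{
	if (ret >= 0)
		return ret;      /* index of the enqueued job */
	if (ret == -ENOSPC)
		return -EAGAIN;  /* no space left: retry later */
	return -EIO;             /* vendor-specific error: SW fallback */
}
```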
>
> Thanks,
> Jiayu
> >
> > Thanks,
> > Chenbo
> >
> >
>
* RE: [RFC 1/3] ethdev: support GRE optional fields
@ 2022-01-19 10:56 4% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2022-01-19 10:56 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL),
Sean Zhang (Networking SW),
Matan Azrad, Ferruh Yigit
Cc: Andrew Rybchenko, dev
Hi,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>
> 19/01/2022 10:53, Ferruh Yigit:
> > On 12/30/2021 3:08 AM, Sean Zhang wrote:
> > > --- a/lib/ethdev/rte_flow.h
> > > +++ b/lib/ethdev/rte_flow.h
> > > /**
> > > + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
> > > + *
> > > + * Matches GRE optional fields in header.
> > > + */
> > > +struct rte_gre_hdr_option {
> > > + rte_be16_t checksum;
> > > + rte_be32_t key;
> > > + rte_be32_t sequence;
> > > +};
> > > +
> >
> > Hi Ori, Andrew,
> >
> > The decision was to have protocol structs in the net library and flow structs
> > use from there, wasn't it?
> > (Btw, a deprecation notice is still pending to clear some existing ones)
> >
> > So for the GRE optional fields, what about having a struct in the 'rte_gre.h'?
> > (Also perhaps an GRE extended protocol header can be defined combining
> > 'rte_gre_hdr' and optional fields struct.)
> > Later flow API struct can embed that struct.
>
> +1 for using librte_net.
> This addition in rte_flow looks to be a mistake.
> Please fix the next version.
>
Nice idea,
but my main concern is that the item should match how the header is defined.
Since some of the fields are optional this will look something like this:
struct gre_hdr_option_checksum {
	rte_be16_t checksum;
};
struct gre_hdr_option_key {
	rte_be32_t key;
};
struct gre_hdr_option_sequence {
	rte_be32_t sequence;
};
I don't want to have so many rte_flow_items;
as more and more protocols have optional data, it doesn't make sense to create an item for each.
If I'm looking at it from an ideal place, I would like that the optional fields will be part of the original item.
For example in test pmd I would like to write:
Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
And not
Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
This means that the structure will look like this:
struct rte_flow_item_gre {
	union {
		struct {
			/**
			 * Checksum (1b), reserved 0 (12b), version (3b).
			 * Refer to RFC 2784.
			 */
			rte_be16_t c_rsvd0_ver;
			rte_be16_t protocol; /**< Protocol type. */
		};
		struct rte_gre_hdr hdr;
	};
	rte_be16_t checksum;
	rte_be32_t key;
	rte_be32_t sequence;
};
The main issue with this is that it breaks the ABI.
Maybe to solve this we can create a new structure, gre_ext?
In any case, I think we should consider how to allow adding members to structures
without breaking the ABI.
Best,
Ori
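As a sketch of the librte_net direction suggested earlier in the thread — plain per-option structs in rte_gre.h that a flow item could later embed — something like the following could work. The struct names and the checksum/reserved layout (per RFC 2784/2890) are my assumption, not an agreed API:

```c
#include <stdint.h>

/* Stand-ins for the <rte_byteorder.h> big-endian typedefs. */
typedef uint16_t rte_be16_t;
typedef uint32_t rte_be32_t;

/* Optional GRE fields as individual headers (RFC 2784/2890);
 * each is present only when the corresponding flag bit is set. */
struct gre_hdr_opt_checksum_rsvd {
	rte_be16_t checksum;
	rte_be16_t reserved1;
};

struct gre_hdr_opt_key {
	rte_be32_t key;
};

struct gre_hdr_opt_sequence {
	rte_be32_t sequence;
};
```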
* [PATCH v2] mempool: fix put objects to mempool with cache
@ 2022-01-19 14:52 3% ` Morten Brørup
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
1 sibling, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-19 14:52 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko
Cc: bruce.richardson, jerinjacobk, dev, Morten Brørup
This patch optimizes the rte_mempool_do_generic_put() caching algorithm,
and fixes a bug in it.
The existing algorithm was:
1. Add the objects to the cache
2. Anything greater than the cache size (if it crosses the cache flush
threshold) is flushed to the ring.
Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
"size" objects, which is reflected in the above description.
Now, the algorithm is:
1. If the objects cannot be added to the cache without crossing the
flush threshold, flush the cache to the ring.
2. Add the objects to the cache.
This patch changes these details:
1. Bug: The cache was still full after flushing.
In the opposite direction, i.e. when getting objects from the cache, the
cache is refilled to full level when it crosses the low watermark (which
happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events are out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).
NB: When the application is in a state where put/get events are in
balance, the cache should remain within its low and high watermarks, and
the algorithms for refilling/flushing the cache should not come into
play.
Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards.
2. Minor bug: The flush threshold comparison has been corrected; it must
be "len > flushthresh", not "len >= flushthresh".
Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
would be flushed already when reaching size elements, not when exceeding
size elements.
Now, flushing is triggered when the flush threshold is exceeded, not
when reached.
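The comparison fix in isolation can be sketched as follows (the helper name is mine, not the patch's):

```c
/* With a flush multiplier of 1.0, flushthresh == size. The corrected
 * "greater than" test flushes only when the cache would overflow;
 * the old ">=" would have flushed a cache that was merely full. */
static inline int
cache_would_overflow(unsigned int len, unsigned int n,
		     unsigned int flushthresh)
{
	return len + n > flushthresh;
}
```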
3. Optimization: The most recent (hot) objects are flushed, leaving the
oldest (cold) objects in the mempool cache.
This is bad for CPUs with a small L1 cache, because when they get
objects from the mempool after the mempool cache has been flushed, they
get cold objects instead of hot objects.
Now, the existing (cold) objects in the mempool cache are flushed before
the new (hot) objects are added to the mempool cache.
4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
here, where n is relatively small and unknown at compile time.
Now, it has been replaced by an alternative copying method, optimized
for the fact that most Ethernet PMDs operate in bursts of 4 or 8 mbufs
or multiples thereof.
v2 changes:
- Not adding the new objects to the mempool cache before flushing it
also allows the memory allocated for the mempool cache to be reduced
from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
However, such a change would break the ABI, so it was removed in v2.
- The mempool cache should be cache line aligned for the benefit of the
copying method, which on some CPU architectures performs worse on data
crossing a cache boundary.
However, such a change would break the ABI, so it was removed in v2;
and yet another alternative copying method replaced the rte_memcpy().
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/mempool/rte_mempool.h | 54 +++++++++++++++++++++++++++++----------
1 file changed, 40 insertions(+), 14 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..8a7067ee5b 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -94,7 +94,8 @@ struct rte_mempool_cache {
* Cache is allocated to this size to allow it to overflow in certain
* cases to avoid needless emptying of cache.
*/
- void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
+ void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
+ /**< Cache objects */
} __rte_cache_aligned;
/**
@@ -1334,6 +1335,7 @@ static __rte_always_inline void
rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
unsigned int n, struct rte_mempool_cache *cache)
{
+ uint32_t index;
void **cache_objs;
/* increment stat now, adding in mempool always success */
@@ -1344,31 +1346,56 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache_objs = &cache->objs[cache->len];
+ /* If the request itself is too big for the cache */
+ if (unlikely(n > cache->flushthresh))
+ goto ring_enqueue;
/*
* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
+ * 1. If the objects cannot be added to the cache without
+ * crossing the flush threshold, flush the cache to the ring.
+ * 2. Add the objects to the cache.
*/
- /* Add elements back into the cache */
- rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
+ if (cache->len + n <= cache->flushthresh) {
+ cache_objs = &cache->objs[cache->len];
- cache->len += n;
+ cache->len += n;
+ } else {
+ cache_objs = cache->objs;
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
+ rte_panic("cannot put objects in mempool\n");
+#else
+ rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+#endif
+ cache->len = n;
+ }
+
+ /* Add the objects to the cache. */
+ for (index = 0; index < (n & ~0x3); index += 4) {
+ cache_objs[index] = obj_table[index];
+ cache_objs[index + 1] = obj_table[index + 1];
+ cache_objs[index + 2] = obj_table[index + 2];
+ cache_objs[index + 3] = obj_table[index + 3];
+ }
+ switch (n & 0x3) {
+ case 3:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 2:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 1:
+ cache_objs[index] = obj_table[index];
}
return;
ring_enqueue:
- /* push remaining objects in ring */
+ /* Put the objects into the ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
rte_panic("cannot put objects in mempool\n");
@@ -1377,7 +1404,6 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
#endif
}
-
/**
* Put several objects back in the mempool.
*
--
2.17.1
* [PATCH v3] mempool: fix put objects to mempool with cache
2022-01-19 14:52 3% ` [PATCH v2] mempool: fix put objects to mempool with cache Morten Brørup
@ 2022-01-19 15:03 3% ` Morten Brørup
1 sibling, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-19 15:03 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko
Cc: bruce.richardson, jerinjacobk, dev, Morten Brørup
mempool: fix put objects to mempool with cache
This patch optimizes the rte_mempool_do_generic_put() caching algorithm,
and fixes a bug in it.
The existing algorithm was:
1. Add the objects to the cache
2. Anything greater than the cache size (if it crosses the cache flush
threshold) is flushed to the ring.
Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
"size" objects, which is reflected in the above description.
Now, the algorithm is:
1. If the objects cannot be added to the cache without crossing the
flush threshold, flush the cache to the ring.
2. Add the objects to the cache.
This patch changes these details:
1. Bug: The cache was still full after flushing.
In the opposite direction, i.e. when getting objects from the cache, the
cache is refilled to full level when it crosses the low watermark (which
happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events are out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).
NB: When the application is in a state where put/get events are in
balance, the cache should remain within its low and high watermarks, and
the algorithms for refilling/flushing the cache should not come into
play.
Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards.
2. Minor bug: The flush threshold comparison has been corrected; it must
be "len > flushthresh", not "len >= flushthresh".
Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
would be flushed already when reaching size elements, not when exceeding
size elements.
Now, flushing is triggered when the flush threshold is exceeded, not
when reached.
3. Optimization: The most recent (hot) objects are flushed, leaving the
oldest (cold) objects in the mempool cache.
This is bad for CPUs with a small L1 cache, because when they get
objects from the mempool after the mempool cache has been flushed, they
get cold objects instead of hot objects.
Now, the existing (cold) objects in the mempool cache are flushed before
the new (hot) objects are added to the mempool cache.
4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
here, where n is relatively small and unknown at compile time.
Now, it has been replaced by an alternative copying method, optimized
for the fact that most Ethernet PMDs operate in bursts of 4 or 8 mbufs
or multiples thereof.
v2 changes:
- Not adding the new objects to the mempool cache before flushing it
also allows the memory allocated for the mempool cache to be reduced
from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
However, such a change would break the ABI, so it was removed in v2.
- The mempool cache should be cache line aligned for the benefit of the
copying method, which on some CPU architectures performs worse on data
crossing a cache boundary.
However, such a change would break the ABI, so it was removed in v2;
and yet another alternative copying method replaced the rte_memcpy().
v3 changes:
- Actually remove my modifications of the rte_mempool_cache structure.
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/mempool/rte_mempool.h | 51 +++++++++++++++++++++++++++++----------
1 file changed, 38 insertions(+), 13 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..7b364cfc74 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1334,6 +1334,7 @@ static __rte_always_inline void
rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
unsigned int n, struct rte_mempool_cache *cache)
{
+ uint32_t index;
void **cache_objs;
/* increment stat now, adding in mempool always success */
@@ -1344,31 +1345,56 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache_objs = &cache->objs[cache->len];
+ /* If the request itself is too big for the cache */
+ if (unlikely(n > cache->flushthresh))
+ goto ring_enqueue;
/*
* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
+ * 1. If the objects cannot be added to the cache without
+ * crossing the flush threshold, flush the cache to the ring.
+ * 2. Add the objects to the cache.
*/
- /* Add elements back into the cache */
- rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
+ if (cache->len + n <= cache->flushthresh) {
+ cache_objs = &cache->objs[cache->len];
- cache->len += n;
+ cache->len += n;
+ } else {
+ cache_objs = cache->objs;
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
+ rte_panic("cannot put objects in mempool\n");
+#else
+ rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+#endif
+ cache->len = n;
+ }
+
+ /* Add the objects to the cache. */
+ for (index = 0; index < (n & ~0x3); index += 4) {
+ cache_objs[index] = obj_table[index];
+ cache_objs[index + 1] = obj_table[index + 1];
+ cache_objs[index + 2] = obj_table[index + 2];
+ cache_objs[index + 3] = obj_table[index + 3];
+ }
+ switch (n & 0x3) {
+ case 3:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 2:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 1:
+ cache_objs[index] = obj_table[index];
}
return;
ring_enqueue:
- /* push remaining objects in ring */
+ /* Put the objects into the ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
rte_panic("cannot put objects in mempool\n");
@@ -1377,7 +1403,6 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
#endif
}
-
/**
* Put several objects back in the mempool.
*
--
2.17.1
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2021-12-30 6:08 2% [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
@ 2022-01-19 16:56 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-01-19 16:56 UTC (permalink / raw)
To: Yanling Song, dev; +Cc: yanling.song, yanggan, xuyun, stephen, lihuisong
On 12/30/2021 6:08 AM, Yanling Song wrote:
> The patchset introduces the SPNIC driver for Ramaxel's SPNxxx series NIC cards into DPDK 22.03.
> Ramaxel Memory Technology is a company which supplies a range of electronic products:
> storage, communication, PCB...
> SPNxxx is a series of PCIe interface NIC cards:
> SPN110: 2 PORTs *25G
> SPN120: 4 PORTs *25G
> SPN130: 2 PORTs *100G
>
Hi Yanling,
As far as I can see, the hinic driver (from Huawei) and this spnic driver are alike;
what is the relation between these two?
> The following is main features of our SPNIC:
> - TSO
> - LRO
> - Flow control
> - SR-IOV(Partially supported)
> - VLAN offload
> - VLAN filter
> - CRC offload
> - Promiscuous mode
> - RSS
>
> v6->v5, No real changes:
> 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26 to patch 2;
> 2. Change the description of patch 26.
>
> v5->v4:
> 1. Add prefix "spinc_" for external functions;
> 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
> 3. Do not use void* for keeping the type information
>
> v3->v4:
> 1. Fix ABI test failure;
> 2. Remove some descriptions in spnic.rst.
>
> v2->v3:
> 1. Fix clang compiling failure.
>
> v1->v2:
> 1. Fix coding style issues and compiling failures;
> 2. Only support linux in meson.build;
> 3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
> 4. Fix time_before();
> 5. Remove redundant checks in spnic_dev_configure();
>
> Yanling Song (26):
> drivers/net: introduce a new PMD driver
> net/spnic: initialize the HW interface
> net/spnic: add mbox message channel
> net/spnic: introduce event queue
> net/spnic: add mgmt module
> net/spnic: add cmdq and work queue
> net/spnic: add interface handling cmdq message
> net/spnic: add hardware info initialization
> net/spnic: support MAC and link event handling
> net/spnic: add function info initialization
> net/spnic: add queue pairs context initialization
> net/spnic: support mbuf handling of Tx/Rx
> net/spnic: support Rx congfiguration
> net/spnic: add port/vport enable
> net/spnic: support IO packets handling
> net/spnic: add device configure/version/info
> net/spnic: support RSS configuration update and get
> net/spnic: support VLAN filtering and offloading
> net/spnic: support promiscuous and allmulticast Rx modes
> net/spnic: support flow control
> net/spnic: support getting Tx/Rx queues info
> net/spnic: net/spnic: support xstats statistics
> net/spnic: support VFIO interrupt
> net/spnic: support Tx/Rx queue start/stop
> net/spnic: add doc infrastructure
> net/spnic: fixes unsafe C style code
<...>
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
@ 2022-01-20 8:21 3% ` Morten Brørup
2022-01-21 6:01 3% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2022-01-20 8:21 UTC (permalink / raw)
To: Dharmik Thakkar, honnappa.nagarahalli, Olivier Matz, Andrew Rybchenko
Cc: dev, nd, ruifeng.wang, Beilei Xing
+CC Beilei as i40e maintainer
> From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> Sent: Thursday, 13 January 2022 06.37
>
> Current mempool per core cache implementation stores pointers to mbufs
> On 64b architectures, each pointer consumes 8B
> This patch replaces it with index-based implementation,
> wherein each buffer is addressed by (pool base address + index)
> It reduces the amount of memory/cache required for per core cache
>
> L3Fwd performance testing reveals minor improvements in the cache
> performance (L1 and L2 misses reduced by 0.60%)
> with no change in throughput
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> lib/mempool/rte_mempool.h | 150 +++++++++++++++++++++++++-
> lib/mempool/rte_mempool_ops_default.c | 7 ++
> 2 files changed, 156 insertions(+), 1 deletion(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c15273c..f2403fbc97a7 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -50,6 +50,10 @@
> #include <rte_memcpy.h>
> #include <rte_common.h>
>
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> +#include <rte_vect.h>
> +#endif
> +
> #include "rte_mempool_trace_fp.h"
>
> #ifdef __cplusplus
> @@ -239,6 +243,9 @@ struct rte_mempool {
> int32_t ops_index;
>
> struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> */
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> + void *pool_base_value; /**< Base value to calculate indices */
> +#endif
>
> uint32_t populated_size; /**< Number of populated
> objects. */
> struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> pool */
> @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct rte_mempool_cache
> *cache,
> if (cache == NULL || cache->len == 0)
> return;
> rte_mempool_trace_cache_flush(cache, mp);
> +
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> + unsigned int i;
> + unsigned int cache_len = cache->len;
> + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> + void *base_value = mp->pool_base_value;
> + uint32_t *cache_objs = (uint32_t *) cache->objs;
Hi Dharmik and Honnappa,
The essence of this patch is based on recasting the type of the objs field in the rte_mempool_cache structure from an array of pointers to an array of uint32_t.
However, this effectively breaks the ABI, because the rte_mempool_cache structure is public and part of the API.
Some drivers [1] even bypass the mempool API and access the rte_mempool_cache structure directly, assuming that the objs array in the cache is an array of pointers. So you cannot recast the fields in the rte_mempool_cache structure the way this patch requires.
Although I do consider bypassing an API's accessor functions "spaghetti code", this driver's behavior is formally acceptable as long as the rte_mempool_cache structure is not marked as internal.
I really liked your idea of using indexes instead of pointers, so I'm very sorry to shoot it down. :-(
[1]: E.g. the Intel i40e PMD, http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_avx512.c#L25
-Morten
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
@ 2022-01-20 14:16 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-01-20 14:16 UTC (permalink / raw)
To: Michael Barker; +Cc: dev, Ray Kinsella
18/01/2022 00:23, Michael Barker:
> When using clang with -Wall the use of diagnose_if kicks up a warning,
Please could you copy the warning in the commit log?
> requiring all DPDK includes to be wrapped with the pragma. This change
> isolates the ignore to just the appropriate location and makes it easier
> for users to apply -Wall,-Werror
Please could you explain how it is related to -Wgcc-compat?
[...]
> #define __rte_internal \
> +_Pragma("GCC diagnostic push") \
> +_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> -section(".text.internal")))
> +section(".text.internal"))) \
> +_Pragma("GCC diagnostic pop")
^ permalink raw reply [relevance 0%]
* [PATCH v2 0/4] ethdev: introduce IP reassembly offload
@ 2022-01-20 16:26 4% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if there
is hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to work around the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (4):
ethdev: introduce IP reassembly offload
ethdev: add dev op to set/get IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 19 ++++++
doc/guides/nics/features.rst | 11 ++++
lib/ethdev/ethdev_driver.h | 45 ++++++++++++++
lib/ethdev/rte_ethdev.c | 110 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 104 ++++++++++++++++++++++++++++++++-
lib/ethdev/version.map | 5 ++
lib/security/rte_security.h | 12 +++-
7 files changed, 304 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
@ 2022-01-20 16:45 3% ` Stephen Hemminger
2022-01-20 17:11 0% ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2022-01-20 16:45 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj
On Thu, 20 Jan 2022 21:56:24 +0530
Akhil Goyal <gakhil@marvell.com> wrote:
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * A structure used to set IP reassembly configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure:
> + *
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout;
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
Actually, this is not experimental. You are embedding this in dev_info
and dev_info is not experimental; therefore the reassembly parameters
can never change without breaking ABI of dev_info.
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
2022-01-20 16:45 3% ` Stephen Hemminger
@ 2022-01-20 17:11 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2022-01-20 17:11 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, olivier.matz, rosen.xu,
Jerin Jacob Kollanukkaran
> On Thu, 20 Jan 2022 21:56:24 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice.
> > + *
> > + * A structure used to set IP reassembly configuration.
> > + *
> > + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> > + * the PMD will attempt IP reassembly for the received packets as per
> > + * properties defined in this structure:
> > + *
> > + */
> > +struct rte_eth_ip_reass_params {
> > + /** Maximum time in ms which PMD can wait for other fragments. */
> > + uint32_t reass_timeout;
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
> > +};
> > +
>
> Actually, this is not experimental. You are embedding this in dev_info
> and dev_info is not experimental; therefore the reassembly parameters
> can never change without breaking ABI of dev_info.
Agreed, will remove the experimental tag from this struct.
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-20 8:21 3% ` Morten Brørup
@ 2022-01-21 6:01 3% ` Honnappa Nagarahalli
2022-01-21 7:36 4% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2022-01-21 6:01 UTC (permalink / raw)
To: Morten Brørup, Dharmik Thakkar, Olivier Matz, Andrew Rybchenko
Cc: dev, nd, Ruifeng Wang, Beilei Xing, nd
>
> +CC Beilei as i40e maintainer
>
> > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > Sent: Thursday, 13 January 2022 06.37
> >
> > Current mempool per core cache implementation stores pointers to mbufs
> > On 64b architectures, each pointer consumes 8B This patch replaces it
> > with index-based implementation, wherein each buffer is addressed by
> > (pool base address + index) It reduces the amount of memory/cache
> > required for per core cache
> >
> > L3Fwd performance testing reveals minor improvements in the cache
> > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > throughput
> >
> > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> > lib/mempool/rte_mempool.h | 150 +++++++++++++++++++++++++-
> > lib/mempool/rte_mempool_ops_default.c | 7 ++
> > 2 files changed, 156 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 1e7a3c15273c..f2403fbc97a7 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -50,6 +50,10 @@
> > #include <rte_memcpy.h>
> > #include <rte_common.h>
> >
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > +#include <rte_vect.h>
> > +#endif
> > +
> > #include "rte_mempool_trace_fp.h"
> >
> > #ifdef __cplusplus
> > @@ -239,6 +243,9 @@ struct rte_mempool {
> > int32_t ops_index;
> >
> > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > + void *pool_base_value; /**< Base value to calculate indices */
> > +#endif
> >
> > uint32_t populated_size; /**< Number of populated
> > objects. */
> > struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
> > rte_mempool_cache *cache,
> > if (cache == NULL || cache->len == 0)
> > return;
> > rte_mempool_trace_cache_flush(cache, mp);
> > +
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > + unsigned int i;
> > + unsigned int cache_len = cache->len;
> > + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> > + void *base_value = mp->pool_base_value;
> > + uint32_t *cache_objs = (uint32_t *) cache->objs;
>
> Hi Dharmik and Honnappa,
>
> The essence of this patch is based on recasting the type of the objs field in the
> rte_mempool_cache structure from an array of pointers to an array of
> uint32_t.
>
> However, this effectively breaks the ABI, because the rte_mempool_cache
> structure is public and part of the API.
The patch does not change the public structure; the new member is under a compile-time flag, so I am not sure how it breaks the ABI.
>
> Some drivers [1] even bypass the mempool API and access the
> rte_mempool_cache structure directly, assuming that the objs array in the
> cache is an array of pointers. So you cannot recast the fields in the
> rte_mempool_cache structure the way this patch requires.
IMO, those drivers are at fault. The mempool cache structure is public only because the APIs are inline. We should still maintain modularity and not use the members of structures belonging to another library directly. A similar effort involving rte_ring was not accepted some time back [1]
[1] http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR08MB5814.eurprd08.prod.outlook.com/
>
> Although I do consider bypassing an API's accessor functions "spaghetti
> code", this driver's behavior is formally acceptable as long as the
> rte_mempool_cache structure is not marked as internal.
>
> I really liked your idea of using indexes instead of pointers, so I'm very sorry to
> shoot it down. :-(
>
> [1]: E.g. the Intel i40e PMD,
> http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_avx
> 512.c#L25
It is possible to throw an error when this feature is enabled in this file. Alternatively, this PMD could implement the code for index based mempool.
>
> -Morten
^ permalink raw reply [relevance 3%]
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-21 6:01 3% ` Honnappa Nagarahalli
@ 2022-01-21 7:36 4% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-21 7:36 UTC (permalink / raw)
To: Honnappa Nagarahalli, Dharmik Thakkar, Olivier Matz,
Andrew Rybchenko, Ray Kinsella
Cc: dev, nd, Ruifeng Wang, Beilei Xing, nd
+Ray Kinsella, ABI Policy maintainer
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Friday, 21 January 2022 07.01
>
> >
> > +CC Beilei as i40e maintainer
> >
> > > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > > Sent: Thursday, 13 January 2022 06.37
> > >
> > > Current mempool per core cache implementation stores pointers to
> mbufs
> > > On 64b architectures, each pointer consumes 8B This patch replaces
> it
> > > with index-based implementation, wherein each buffer is addressed
> by
> > > (pool base address + index) It reduces the amount of memory/cache
> > > required for per core cache
> > >
> > > L3Fwd performance testing reveals minor improvements in the cache
> > > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > > throughput
> > >
> > > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > > lib/mempool/rte_mempool.h | 150
> +++++++++++++++++++++++++-
> > > lib/mempool/rte_mempool_ops_default.c | 7 ++
> > > 2 files changed, 156 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 1e7a3c15273c..f2403fbc97a7 100644
> > > --- a/lib/mempool/rte_mempool.h
> > > +++ b/lib/mempool/rte_mempool.h
> > > @@ -50,6 +50,10 @@
> > > #include <rte_memcpy.h>
> > > #include <rte_common.h>
> > >
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > +#include <rte_vect.h>
> > > +#endif
> > > +
> > > #include "rte_mempool_trace_fp.h"
> > >
> > > #ifdef __cplusplus
> > > @@ -239,6 +243,9 @@ struct rte_mempool {
> > > int32_t ops_index;
> > >
> > > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> */
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > + void *pool_base_value; /**< Base value to calculate indices */
> > > +#endif
> > >
> > > uint32_t populated_size; /**< Number of populated
> > > objects. */
> > > struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> > > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
> > > rte_mempool_cache *cache,
> > > if (cache == NULL || cache->len == 0)
> > > return;
> > > rte_mempool_trace_cache_flush(cache, mp);
> > > +
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > + unsigned int i;
> > > + unsigned int cache_len = cache->len;
> > > + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> > > + void *base_value = mp->pool_base_value;
> > > + uint32_t *cache_objs = (uint32_t *) cache->objs;
> >
> > Hi Dharmik and Honnappa,
> >
> > The essence of this patch is based on recasting the type of the objs
> field in the
> > rte_mempool_cache structure from an array of pointers to an array of
> > uint32_t.
> >
> > However, this effectively breaks the ABI, because the
> rte_mempool_cache
> > structure is public and part of the API.
> The patch does not change the public structure, the new member is under
> compile time flag, not sure how it breaks the ABI.
>
> >
> > Some drivers [1] even bypass the mempool API and access the
> > rte_mempool_cache structure directly, assuming that the objs array in
> the
> > cache is an array of pointers. So you cannot recast the fields in the
> > rte_mempool_cache structure the way this patch requires.
> IMO, those drivers are at fault. The mempool cache structure is public
> only because the APIs are inline. We should still maintain modularity
> and not use the members of structures belonging to another library
> directly. A similar effort involving rte_ring was not accepted sometime
> back [1]
>
> [1]
> http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR0
> 8MB5814.eurprd08.prod.outlook.com/
>
> >
> > Although I do consider bypassing an API's accessor functions
> "spaghetti
> > code", this driver's behavior is formally acceptable as long as the
> > rte_mempool_cache structure is not marked as internal.
> >
> > I really liked your idea of using indexes instead of pointers, so I'm
> very sorry to
> > shoot it down. :-(
> >
> > [1]: E.g. the Intel i40e PMD,
> >
> http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_
> avx
> > 512.c#L25
> It is possible to throw an error when this feature is enabled in this
> file. Alternatively, this PMD could implement the code for index based
> mempool.
>
I agree with both your points, Honnappa.
The ABI remains intact, and only changes when this feature is enabled at compile time.
In addition to your suggestions, I propose that the patch modifies the objs type in the mempool cache structure itself, instead of type casting it through an access variable. This should throw an error when compiling an application that accesses it as a pointer array instead of a uint32_t array - like the affected Intel PMDs.
The updated objs field in the mempool cache structure should have the same size when compiled as the original objs field, so this feature doesn't change anything else in the ABI, only the type of the mempool cache objects.
Also, the description of the feature should stress that applications accessing the cache objects directly will fail miserably.
^ permalink raw reply [relevance 4%]
-- links below jump to the message on this page --
2020-04-28 23:58 [dpdk-dev] [PATCH v3 0/8] eal: cleanup resources on shutdown Stephen Hemminger
2021-11-13 0:28 3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
2021-11-13 3:32 3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
2021-11-13 17:22 3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
2020-10-08 15:30 [dpdk-dev] [PATCH v4 1/5] eal: add API for bus close rohit.raj
2022-01-10 5:26 3% ` [PATCH v5 1/2] " rohit.raj
2021-01-12 1:04 [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation Stephen Hemminger
2021-02-12 0:21 ` [dpdk-dev] [PATCH v2] " Honnappa Nagarahalli
2021-05-12 19:10 ` Thomas Monjalon
2021-11-08 10:18 0% ` Thomas Monjalon
2021-03-10 23:24 [dpdk-dev] [PATCH] doc: propose correction rte_bsf64 return type declaration Tyler Retzlaff
2021-10-26 7:45 ` [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use Morten Brørup
2021-11-11 4:15 ` Tyler Retzlaff
2021-11-11 11:54 3% ` Thomas Monjalon
2021-11-11 12:41 0% ` Morten Brørup
2021-06-23 17:31 [dpdk-dev] [PATCH] doc: note KNI alternatives and deprecation plan Ferruh Yigit
2021-11-23 12:08 ` [PATCH v2 1/2] doc: note KNI alternatives Ferruh Yigit
2021-11-23 12:08 5% ` [PATCH v2 2/2] doc: announce KNI deprecation Ferruh Yigit
2021-11-24 17:16 ` [PATCH v3 1/2] doc: note KNI alternatives Ferruh Yigit
2021-11-24 17:16 5% ` [PATCH v3 2/2] doc: announce KNI deprecation Ferruh Yigit
2021-08-03 8:26 [dpdk-dev] [RFC v2 1/3] eventdev: allow for event devices requiring maintenance Mattias Rönnblom
2021-10-26 17:31 ` [dpdk-dev] [PATCH " Mattias Rönnblom
2021-10-29 14:38 ` Jerin Jacob
2021-10-29 15:03 ` Mattias Rönnblom
2021-10-29 15:17 ` Jerin Jacob
2021-11-01 9:26 3% ` Mattias Rönnblom
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-18 22:56 ` Stephen Hemminger
2021-10-19 8:32 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-20 15:30 3% ` Dmitry Kozlyuk
2021-10-21 9:16 0% ` Harman Kalra
2021-10-21 12:33 0% ` Dmitry Kozlyuk
2021-10-19 18:35 4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-19 18:35 1% ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-19 21:27 4% ` Dmitry Kozlyuk
2021-10-20 9:25 3% ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-22 20:49 4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-24 20:04 4% ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
2021-10-25 13:04 0% ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
2021-10-25 13:09 0% ` David Marchand
2021-10-25 13:34 4% ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 14:27 4% ` [dpdk-dev] [PATCH v8 " David Marchand
2021-10-25 14:32 0% ` Raslan Darawsheh
2021-10-25 19:24 0% ` David Marchand
2021-09-01 5:30 [dpdk-dev] [PATCH 0/2] *** support IOMMU for DMA device *** Xuan Ding
2021-10-11 7:59 ` [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device Xuan Ding
2021-10-21 12:33 0% ` Maxime Coquelin
2021-09-03 0:47 [dpdk-dev] [PATCH 0/5] Packet capture framework enhancements Stephen Hemminger
2021-10-20 21:42 ` [dpdk-dev] [PATCH v15 00/12] Packet capture framework update Stephen Hemminger
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-21 14:16 0% ` Kinsella, Ray
2021-10-27 6:34 0% ` Wang, Yinan
2021-10-20 21:42 1% ` [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
2021-09-06 16:55 [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver Selwin Sebastian
2021-09-06 17:17 ` David Marchand
2021-10-27 14:59 0% ` Thomas Monjalon
2021-10-28 14:54 0% ` Sebastian, Selwin
2021-09-09 16:40 [dpdk-dev] [PATCH] port: eventdev port api promoted Rahul Shah
2021-09-10 7:36 ` David Marchand
2021-09-10 13:40 ` Kinsella, Ray
2021-10-13 12:12 ` Thomas Monjalon
2021-10-20 9:55 3% ` Kinsella, Ray
2021-09-09 17:56 [dpdk-dev] [PATCH 00/18] comment spelling errors Stephen Hemminger
2021-11-12 0:02 ` [PATCH v4 00/18] fix docbook and " Stephen Hemminger
2021-11-12 0:02 4% ` [PATCH v4 08/18] eal: fix typos in comments Stephen Hemminger
2021-11-12 15:22 0% ` Kinsella, Ray
2021-09-10 2:23 [dpdk-dev] [PATCH 0/8] Removal of PCI bus ABIs Chenbo Xia
2021-10-14 7:07 ` [dpdk-dev] [PATCH v2 0/7] " Thomas Monjalon
2021-10-14 8:07 ` Xia, Chenbo
2021-10-14 8:25 ` Thomas Monjalon
2021-10-27 12:03 4% ` Xia, Chenbo
2021-10-04 13:29 [dpdk-dev] [PATCH v2] ci: update machine meson option to platform Juraj Linkeš
2021-10-11 13:40 ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
2021-10-14 12:26 ` Aaron Conole
2021-10-25 15:42 0% ` Thomas Monjalon
2021-10-05 20:15 [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-07 22:10 ` [dpdk-dev] [PATCH v5 " Dmitry Kozlyuk
2021-10-22 21:24 4% ` Thomas Monjalon
2021-10-08 21:28 [dpdk-dev] [PATCH] lpm: fix buffer overflow Vladimir Medvedkin
2021-10-20 19:55 3% ` David Marchand
2021-10-21 17:15 0% ` Medvedkin, Vladimir
2021-10-08 22:40 [dpdk-dev] [PATCH v15 0/9] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-10-09 7:41 ` [dpdk-dev] [PATCH v16 " Narcisa Ana Maria Vasile
2021-10-09 7:41 ` [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management Narcisa Ana Maria Vasile
2021-10-12 16:32 ` Thomas Monjalon
2021-11-09 2:07 3% ` Narcisa Ana Maria Vasile
2021-11-10 3:13 0% ` Narcisa Ana Maria Vasile
2021-11-10 3:01 3% ` [dpdk-dev] [PATCH v17 00/13] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-11-11 1:33 3% ` [PATCH v18 0/8] " Narcisa Ana Maria Vasile
2021-10-11 12:43 [dpdk-dev] [PATCH v2 0/5] cryptodev: hide internal structures Akhil Goyal
2021-10-18 14:41 ` [dpdk-dev] [PATCH v3 0/7] " Akhil Goyal
2021-10-18 14:41 ` [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-19 16:00 0% ` Zhang, Roy Fan
2021-10-18 14:42 ` [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-19 12:28 0% ` Ananyev, Konstantin
2021-10-20 11:27 ` [dpdk-dev] [PATCH v4 0/8] cryptodev: hide internal structures Akhil Goyal
2021-10-20 11:27 2% ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-20 11:27 3% ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-20 11:27 7% ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
2021-10-13 1:52 [dpdk-dev] [PATCH v4 2/2] app/test: delete cmdline free function zhihongx.peng
2021-10-18 13:58 ` [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-20 9:22 0% ` Peng, ZhihongX
2021-10-13 19:22 [dpdk-dev] [PATCH v2 1/7] security: rework session framework Akhil Goyal
2021-10-18 21:34 ` [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework Akhil Goyal
2021-10-18 21:34 ` [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework Akhil Goyal
2021-10-20 19:27 0% ` Ananyev, Konstantin
2021-10-21 6:53 0% ` Akhil Goyal
2021-10-21 10:38 0% ` Ananyev, Konstantin
2021-10-20 15:45 ` [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework Power, Ciara
2021-10-20 16:41 3% ` Akhil Goyal
2021-10-20 16:48 0% ` Akhil Goyal
2021-10-20 18:04 0% ` Akhil Goyal
2021-10-21 8:43 0% ` Zhang, Roy Fan
2021-10-13 19:27 [dpdk-dev] [PATCH v2] test/hash: fix buffer overflow Vladimir Medvedkin
2021-10-14 17:48 ` [dpdk-dev] [PATCH v3] " Vladimir Medvedkin
2021-10-15 9:33 ` David Marchand
2021-10-15 13:02 ` Medvedkin, Vladimir
2021-10-19 7:02 ` David Marchand
2021-10-19 15:57 0% ` Medvedkin, Vladimir
2021-10-14 20:55 [dpdk-dev] [PATCH] ring: fix size of name array in ring structure Honnappa Nagarahalli
2021-10-20 23:06 0% ` Ananyev, Konstantin
2021-10-21 7:35 0% ` David Marchand
2021-10-15 5:13 [dpdk-dev] [PATCH] app/testpmd: fix l4 sw csum over multi segments Xiaoyun Li
2021-12-03 11:38 ` [PATCH v4 0/2] Add functions to calculate UDP/TCP cksum in mbuf Xiaoyun Li
2021-12-03 11:38 3% ` [PATCH v4 1/2] net: add " Xiaoyun Li
2021-12-15 11:33 0% ` Singh, Aman Deep
2022-01-04 15:18 0% ` Li, Xiaoyun
2022-01-04 15:40 0% ` Li, Xiaoyun
2022-01-06 12:56 0% ` Singh, Aman Deep
2022-01-06 16:03 ` [PATCH v5 0/2] Add " Xiaoyun Li
2022-01-06 16:03 3% ` [PATCH v5 1/2] net: add " Xiaoyun Li
2021-10-15 8:16 [dpdk-dev] [PATCH v14 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
2021-10-19 8:18 ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
2021-10-19 12:18 0% ` Dumitrescu, Cristian
2021-10-19 12:45 3% ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
2021-10-20 7:49 3% ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
2021-10-25 11:32 3% ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
2021-10-26 8:24 3% ` Liu, Yu Y
2021-10-26 8:33 0% ` Thomas Monjalon
2021-10-26 10:02 0% ` Dumitrescu, Cristian
2021-10-28 10:17 3% ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
2021-11-02 23:57 3% ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
2021-11-03 17:52 0% ` Thomas Monjalon
2021-11-04 8:29 0% ` Liguzinski, WojciechX
2021-11-04 10:40 3% ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
2021-11-04 10:49 3% ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
2021-11-04 11:03 3% ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
2021-11-04 14:55 3% ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
2021-10-15 9:30 [dpdk-dev] [PATCH v2 0/5] optimized Toeplitz hash implementation Vladimir Medvedkin
2021-10-15 9:30 ` [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz " Vladimir Medvedkin
2021-10-15 16:58 ` Stephen Hemminger
2021-10-18 10:40 ` Ananyev, Konstantin
2021-10-19 1:15 ` Stephen Hemminger
2021-10-19 15:42 0% ` Medvedkin, Vladimir
2021-10-15 19:02 [dpdk-dev] [PATCH v4 01/14] eventdev: make driver interface as internal pbhagavatula
2021-10-18 23:35 ` [dpdk-dev] [PATCH v5 " pbhagavatula
2021-10-18 23:36 ` [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage pbhagavatula
2021-10-20 20:24 0% ` Carrillo, Erik G
2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
2021-10-18 14:49 ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
2021-10-19 8:49 ` David Marchand
2021-10-19 9:04 ` Andrew Rybchenko
2021-10-19 9:23 0% ` Andrew Rybchenko
2021-10-19 9:27 0% ` David Marchand
2021-10-19 9:38 0% ` Andrew Rybchenko
2021-10-19 9:42 0% ` Thomas Monjalon
2021-10-18 15:43 [dpdk-dev] [PATCH v4] ethdev: add namespace Ferruh Yigit
2021-10-20 19:23 1% ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
2021-10-22 2:02 1% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
2021-10-22 11:03 1% ` [dpdk-dev] [PATCH v7] " Ferruh Yigit
2021-10-19 18:14 [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library jerinj
2021-10-25 7:35 ` Mattias Rönnblom
2021-10-25 9:03 ` Jerin Jacob
2021-10-29 11:57 ` Mattias Rönnblom
2021-10-29 15:51 2% ` Jerin Jacob
2021-10-31 9:18 4% ` Mattias Rönnblom
2021-10-31 14:01 4% ` Jerin Jacob
2021-10-31 19:34 0% ` Thomas Monjalon
2021-10-31 21:13 2% ` Jerin Jacob
2021-10-31 21:55 0% ` Thomas Monjalon
2021-10-31 22:19 0% ` Jerin Jacob
2021-10-25 21:40 4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
2021-10-28 7:10 0% ` Jiang, YuX
2021-11-01 11:53 0% ` Jiang, YuX
2021-11-05 21:51 0% ` Thinh Tran
2021-11-08 10:50 0% ` Pei Zhang
2021-10-26 15:56 [dpdk-dev] [PATCH v3 1/3] config/x86: add support for AMD platform Aman Kumar
2021-10-27 7:28 ` [dpdk-dev] [PATCH v4 1/2] " Aman Kumar
2021-10-27 7:28 ` [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy " Aman Kumar
2021-10-27 8:13 ` Thomas Monjalon
2021-10-27 11:03 3% ` Van Haaren, Harry
2021-10-27 11:41 0% ` Mattias Rönnblom
2021-10-27 12:15 ` Van Haaren, Harry
2021-10-27 12:22 ` Ananyev, Konstantin
2021-10-27 13:34 ` Aman Kumar
2021-10-27 14:10 2% ` Van Haaren, Harry
2021-10-27 14:31 0% ` Thomas Monjalon
2021-10-29 16:01 0% ` Song, Keesang
2021-10-27 17:43 2% [dpdk-dev] [Bug 842] [dpdk-21.11 rc1] FIPS tests are failing bugzilla
2021-10-28 8:35 3% [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable Thomas Monjalon
2021-10-28 8:38 0% ` Kinsella, Ray
2021-10-28 8:56 0% ` Andrew Rybchenko
2021-11-04 10:45 0% ` Ferruh Yigit
2021-10-28 14:15 [dpdk-dev] [PATCH v2] vhost: mark vDPA driver API as internal Maxime Coquelin
2021-10-29 16:15 3% ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
2021-10-28 21:01 4% [dpdk-dev] Windows community call: MoM 2021-10-27 Dmitry Kozlyuk
2021-10-29 13:48 [dpdk-dev] Overriding rte_config.h Ben Magistro
2021-11-01 15:03 ` Bruce Richardson
2021-11-02 11:20 ` Ananyev, Konstantin
2021-11-02 12:07 ` Bruce Richardson
2021-11-02 12:24 3% ` Ananyev, Konstantin
2021-11-02 14:19 3% ` Bruce Richardson
2021-11-02 15:00 0% ` Ananyev, Konstantin
2021-11-03 14:38 0% ` Ben Magistro
2021-11-04 11:03 0% ` Ananyev, Konstantin
2021-11-02 9:56 4% [dpdk-dev] [PATCH v3] vhost: mark vDPA driver API as internal Maxime Coquelin
2021-11-02 10:47 4% [dpdk-dev] [PATCH] vhost: rename driver callbacks struct Maxime Coquelin
2021-11-03 8:16 0% ` Xia, Chenbo
2021-11-02 19:03 14% [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter Konstantin Ananyev
2021-11-08 22:08 0% ` Thomas Monjalon
2021-11-03 5:00 [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost Chenbo Xia
2021-11-03 5:25 3% ` Xia, Chenbo
2021-11-03 7:03 0% ` David Marchand
2021-11-03 17:50 5% [dpdk-dev] [PATCH] doc: remove deprecation notice for interrupt Harman Kalra
2021-11-03 22:48 [dpdk-dev] [PATCH v2] ethdev: mark old macros as deprecated Ferruh Yigit
2022-01-12 14:36 1% ` [PATCH v3] " Ferruh Yigit
2021-11-04 19:54 4% [dpdk-dev] Minutes of Technical Board Meeting, 2021-Nov-03 Maxime Coquelin
2021-11-08 11:51 [dpdk-dev] [PATCH v3] ip_frag: hide internal structures Konstantin Ananyev
2021-11-08 13:55 ` [dpdk-dev] [PATCH v4 0/2] ip_frag cleanup patches Konstantin Ananyev
2021-11-08 13:55 3% ` [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace Konstantin Ananyev
2021-11-09 12:32 3% ` [dpdk-dev] [PATCH v5] " Konstantin Ananyev
2021-11-10 16:48 [PATCH 0/5] Extend optional libraries list David Marchand
2021-11-10 16:48 4% ` [PATCH 1/5] ci: test build with minimum configuration David Marchand
2021-11-16 0:24 4% ethdev: hide internal structures Tyler Retzlaff
2021-11-16 9:32 0% ` Ferruh Yigit
2021-11-16 17:54 4% ` Tyler Retzlaff
2021-11-16 20:07 4% ` Ferruh Yigit
2021-11-16 20:44 0% ` Tyler Retzlaff
2021-11-16 10:32 3% ` Ananyev, Konstantin
2021-11-16 19:10 0% ` Tyler Retzlaff
2021-11-16 21:25 0% ` Stephen Hemminger
2021-11-16 22:58 3% ` Tyler Retzlaff
2021-11-16 23:22 0% ` Stephen Hemminger
2021-11-17 22:05 0% ` Tyler Retzlaff
2021-11-18 14:46 [PATCH v1 0/3] Fix typo's and capitalise PMD Sean Morrissey
2021-11-18 14:46 1% ` [PATCH v1 1/3] fix PMD wording typo Sean Morrissey
2021-11-22 10:50 ` [PATCH v2 0/3] Fix typo's and capitalise PMD Sean Morrissey
2021-11-22 10:50 1% ` [PATCH v2 1/3] fix PMD wording typo Sean Morrissey
2021-11-18 19:28 [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister eagostini
2021-11-18 20:19 ` Tyler Retzlaff
2021-11-19 9:34 ` Ferruh Yigit
2021-11-19 9:56 ` Thomas Monjalon
2021-11-24 17:24 3% ` Tyler Retzlaff
2021-11-24 18:04 0% ` Bruce Richardson
2021-12-01 21:37 0% ` Tyler Retzlaff
2021-11-22 10:54 [RFC 0/1] integrate dmadev in vhost Jiayu Hu
2021-12-30 21:55 ` [PATCH v1 " Jiayu Hu
2021-12-30 21:55 ` [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath Jiayu Hu
2022-01-14 6:30 3% ` Xia, Chenbo
2022-01-17 5:39 0% ` Hu, Jiayu
2022-01-19 2:18 0% ` Xia, Chenbo
2021-11-22 17:00 12% [PATCH v1] doc: update release notes for 21.11 John McNamara
2021-11-22 17:05 0% ` Ajit Khaparde
2021-11-23 7:59 [PATCH] ethdev: deprecate header fields and metadata flow actions Viacheslav Ovsiienko
2021-11-24 15:37 ` [PATCH v3] " Viacheslav Ovsiienko
2021-11-25 12:31 4% ` Ferruh Yigit
2021-11-25 12:50 0% ` Thomas Monjalon
2021-11-24 13:00 4% Minutes of Technical Board Meeting, 2021-11-17 Olivier Matz
2021-11-26 20:34 4% DPDK 21.11 released! David Marchand
2021-11-29 13:16 11% [PATCH] version: 22.03-rc0 David Marchand
2021-11-30 15:35 0% ` Thomas Monjalon
2021-11-30 19:51 3% ` David Marchand
2021-12-02 16:13 0% ` David Marchand
2021-12-02 18:11 11% ` [PATCH v2] " David Marchand
2021-12-02 19:34 0% ` Thomas Monjalon
2021-12-02 20:36 0% ` David Marchand
2021-11-29 19:47 [PATCH v3 1/5] common/cnxk: add REE HW definitions lironh
2021-12-07 18:31 ` [PATCH v4 0/4] regex/cn9k: use cnxk infrastructure lironh
2021-12-08 9:14 3% ` Jerin Jacob
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
2021-11-29 20:45 vmxnet3 no longer functional on DPDK 21.11 Lewis Donzis
2021-11-30 13:42 ` Bruce Richardson
2021-12-06 1:52 3% ` Lewis Donzis
2021-12-03 10:03 3% [RFC] cryptodev: asymmetric crypto random number source Kusztal, ArkadiuszX
2021-12-13 8:14 3% ` Akhil Goyal
2021-12-13 9:27 0% ` Ramkumar Balu
2021-12-17 15:26 0% ` Kusztal, ArkadiuszX
2021-12-04 17:24 [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control jerinj
2021-12-04 17:38 3% ` Stephen Hemminger
2021-12-05 7:03 3% ` Jerin Jacob
2021-12-05 18:00 0% ` Stephen Hemminger
2021-12-06 9:57 0% ` Jerin Jacob
2021-12-06 8:35 1% [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers jerinj
2021-12-06 13:35 3% ` Ferruh Yigit
2021-12-07 7:39 3% ` Jerin Jacob
2021-12-07 11:01 0% ` Ferruh Yigit
2021-12-07 11:51 0% ` Kevin Traynor
2021-12-13 16:48 [PATCH 1/2] maintainers: fix stable maintainers list Kevin Traynor
2021-12-13 16:48 5% ` [PATCH 2/2] doc: update LTS release cadence Kevin Traynor
2021-12-14 14:12 [PATCH 00/12] add packet generator library and example app Ronan Randles
2021-12-14 14:57 ` Bruce Richardson
2022-01-12 16:18 3% ` Morten Brørup
2021-12-24 22:59 [PATCH 0/1] mempool: implement index-based per core cache Dharmik Thakkar
2022-01-13 5:36 ` [PATCH v2 " Dharmik Thakkar
2022-01-13 5:36 ` [PATCH v2 1/1] " Dharmik Thakkar
2022-01-20 8:21 3% ` Morten Brørup
2022-01-21 6:01 3% ` Honnappa Nagarahalli
2022-01-21 7:36 4% ` Morten Brørup
2021-12-25 11:28 2% [PATCH v4 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-26 15:34 [RFC] mempool: rte_mempool_do_generic_get optimizations Morten Brørup
2022-01-19 14:52 3% ` [PATCH v2] mempool: fix put objects to mempool with cache Morten Brørup
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
2021-12-29 13:37 2% [PATCH v5 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-30 3:08 [RFC 0/3] Add support for GRE optional fields matching Sean Zhang
2021-12-30 3:08 ` [RFC 1/3] ethdev: support GRE optional fields Sean Zhang
2022-01-19 9:53 ` Ferruh Yigit
2022-01-19 10:01 ` Thomas Monjalon
2022-01-19 10:56 4% ` Ori Kam
2021-12-30 6:08 2% [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2022-01-19 16:56 0% ` Ferruh Yigit
2022-01-03 15:08 [PATCH 0/8] ethdev: introduce IP reassembly offload Akhil Goyal
2022-01-20 16:26 4% ` [PATCH v2 0/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
2022-01-20 16:45 3% ` Stephen Hemminger
2022-01-20 17:11 0% ` [EXT] " Akhil Goyal
2022-01-17 23:14 4% [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if Michael Barker
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
2022-01-20 14:16 0% ` Thomas Monjalon